WO2023179705A1 - Encoding and decoding method, apparatus and device - Google Patents

Encoding and decoding method, apparatus and device

Info

Publication number
WO2023179705A1
Authority
WO
WIPO (PCT)
Prior art keywords
information, target, quantization, precision, geometric
Application number
PCT/CN2023/083347
Other languages
English (en)
French (fr)
Inventor
邹文杰
张伟
杨付正
吕卓逸
Original Assignee
Vivo Mobile Communication Co., Ltd.
Application filed by Vivo Mobile Communication Co., Ltd.
Publication of WO2023179705A1


Classifications

    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T9/00 Image coding
    • G06T9/001 Model-based coding, e.g. wire frame
    • H04N19/124 Quantisation
    • H04N19/136 Incoming video signal characteristics or properties
    • H04N19/597 Predictive coding specially adapted for multi-view video sequence encoding

Definitions

  • This application belongs to the field of coding and decoding technology, and specifically relates to a coding and decoding method, device and equipment.
  • The three-dimensional mesh has been the most popular representation of three-dimensional models for many years, and it plays an important role in many applications. Because its expression is simple, three-dimensional mesh rendering is widely implemented with hardware algorithms in the graphics processing units (GPUs) of computers, tablets and smartphones.
  • The geometric information of a three-dimensional mesh can be compressed using point cloud compression algorithms, but the compression efficiency of this approach is not high.
  • Embodiments of the present application provide an encoding and decoding method, device and equipment, which can solve the problem of low compression efficiency in the existing compression methods for three-dimensional grid geometric information.
  • the first aspect provides an encoding method, including:
  • the encoding end performs quantization processing on the geometric information of the target three-dimensional grid according to the first quantization parameter to obtain first information.
  • the first information includes at least one of the following: first precision geometric information, second precision geometric information, and supplementary point information; the supplementary point information includes the fourth precision geometric information of the supplementary point, and the fourth precision geometric information is the three-dimensional coordinate information lost during the quantization process of the supplementary point;
  • the encoding end performs quantization processing on the first part of the geometric information in the first information according to the second quantization parameter, where the first part of the geometric information includes at least one of the second precision geometric information and the fourth precision geometric information of the supplementary point;
  • the encoding end encodes the quantized first information and the quantization information, where the quantization information includes first quantization information used to indicate the first quantization parameter and second quantization information used to indicate the second quantization parameter;
  • the first precision geometric information is the quantized geometric information of the target three-dimensional grid;
  • the second precision geometric information is the geometric information lost during the quantization process of the target three-dimensional grid;
  • the supplementary point information is the information of points generated during the quantization process that require additional processing.
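The two-stage quantization described in the first aspect can be sketched as follows; the integer-division form, the names `qp1`/`qp2`, and the list layout are illustrative assumptions, not the patent's normative procedure:

```python
def quantize_geometry(vertices, qp1, qp2):
    """Hypothetical two-stage quantization of mesh vertex coordinates (integer QPs).

    Stage 1 produces the first precision (low-precision) geometry; the remainder
    lost in stage 1 is the second precision (high-precision) geometry, which
    stage 2 quantizes again with qp2.
    """
    low, high_q = [], []
    for v in vertices:
        lv = [c // qp1 for c in v]                  # first precision geometric information
        hv = [c - l * qp1 for c, l in zip(v, lv)]   # second precision: lost in stage 1
        low.append(lv)
        high_q.append([h // qp2 for h in hv])       # stage 2: quantize the residual
    return low, high_q

# Example: one vertex with integer coordinates
low, high_q = quantize_geometry([[10, 23, 7]], qp1=4, qp2=2)
```

The example vertex (10, 23, 7) yields low-precision coordinates (2, 5, 1) and stage-1 residuals (2, 3, 3), which stage 2 further quantizes.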
  • the second aspect provides a decoding method, including:
  • the decoding end decodes the obtained code stream to obtain quantized information and first information.
  • the first information includes at least one of the following: first precision geometric information, second precision geometric information, and supplementary point information.
  • the supplementary point information includes the fourth precision geometric information of the supplementary point, which is the three-dimensional coordinate information lost during the quantization process of the supplementary point;
  • the quantization information includes first quantization information used to indicate the first quantization parameter and second quantization information used to indicate the second quantization parameter.
  • the second quantization parameter is a quantization parameter that quantizes the first part of the geometric information in the first information, where the first part of the geometric information includes at least one of second precision geometric information and fourth precision geometric information of the supplementary point;
  • the decoder performs inverse quantization processing on the first information according to the quantized information to obtain the target three-dimensional grid;
  • the first precision geometric information is the geometric information after quantization of the target three-dimensional grid
  • the second precision geometric information is the geometric information lost during the quantization process of the target three-dimensional grid
  • the supplementary point information is information about points generated during the quantization process that require additional processing.
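The decoding side's inverse quantization can likewise be sketched; this assumes the low-precision geometry came from integer division by a first parameter `qp1` and the residual was further quantized by `qp2` (both names illustrative), so the recovered coordinates are only approximate when `qp2 > 1`:

```python
def dequantize_geometry(low, high_q, qp1, qp2):
    """Hypothetical inverse quantization: rescale both parts and sum them.

    The result is approximate when qp2 > 1, because the second-stage
    quantization of the residual is lossy.
    """
    return [[l * qp1 + h * qp2 for l, h in zip(lv, hv)]
            for lv, hv in zip(low, high_q)]

# Invert the quantization of a vertex whose parts are (2, 5, 1) and (1, 1, 1)
recon = dequantize_geometry([[2, 5, 1]], [[1, 1, 1]], qp1=4, qp2=2)
```

With qp1 = 4 and qp2 = 2 the reconstruction lands within one stage-2 step of the original coordinates.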
  • an encoding device including:
  • a first processing module configured to perform quantization processing on the geometric information of the target three-dimensional grid according to the first quantization parameter to obtain first information, where the first information includes at least one of the following: first precision geometric information, second precision geometric information, and supplementary point information; the supplementary point information includes the fourth precision geometric information of the supplementary point, and the fourth precision geometric information is the three-dimensional coordinate information lost during the quantization process of the supplementary point;
  • a second processing module configured to perform quantization processing on the first part of the geometric information in the first information according to the second quantization parameter, where the first part of the geometric information includes at least one of the second precision geometric information and the fourth precision geometric information of the supplementary point;
  • a first encoding module configured to encode the quantized first information and the quantization information, where the quantization information includes first quantization information indicating the first quantization parameter and second quantization information indicating the second quantization parameter;
  • the first precision geometric information is the geometric information after quantization of the target three-dimensional grid
  • the second precision geometric information is the geometric information lost during the quantization process of the target three-dimensional grid
  • the supplementary point information is information about points generated during the quantization process that require additional processing.
  • a decoding device including:
  • the third processing module is used to decode the obtained code stream and obtain quantized information and first information.
  • the first information includes at least one of the following: first precision geometric information, second precision geometric information, and supplementary point information; the supplementary point information includes the fourth precision geometric information of the supplementary point, and the fourth precision geometric information is the three-dimensional coordinate information lost during the quantization process of the supplementary point;
  • the quantization information includes first quantization information used to indicate the first quantization parameter and second quantization information used to indicate the second quantization parameter, where the second quantization parameter is a quantization parameter that quantizes the first part of the geometric information in the first information,
  • the first part of the geometric information includes at least one of second precision geometric information and fourth precision geometric information of the supplementary point;
  • a fourth processing module configured to perform inverse quantization processing on the first information according to the quantified information to obtain the target three-dimensional grid
  • the first precision geometric information is the geometric information after quantization of the target three-dimensional grid
  • the second precision geometric information is the geometric information lost during the quantization process of the target three-dimensional grid
  • the supplementary point information is information about points generated during the quantization process that require additional processing.
  • a coding device including a processor and a memory.
  • the memory stores programs or instructions that can be run on the processor; when the programs or instructions are executed by the processor, the steps of the method described in the first aspect are implemented.
  • an encoding device including a processor and a communication interface, wherein the processor is configured to perform quantization processing on the geometric information of the target three-dimensional grid according to the first quantization parameter to obtain the first information
  • the first information includes at least one of the following: first precision geometric information, second precision geometric information, and supplementary point information; the supplementary point information includes the fourth precision geometric information of the supplementary point, and the fourth precision geometric information is the three-dimensional coordinate information lost during the quantization process of the supplementary point. The processor is further configured to perform quantization processing on the first part of the geometric information in the first information according to the second quantization parameter, where the first part of the geometric information includes at least one of the second precision geometric information and the fourth precision geometric information of the supplementary point, and to encode the quantized first information and the quantization information, the quantization information including first quantization information used to indicate the first quantization parameter and second quantization information used to indicate the second quantization parameter;
  • the first precision geometric information is the geometric information after quantization of the target three-dimensional grid
  • the second precision geometric information is the geometric information lost during the quantization process of the target three-dimensional grid
  • the supplementary point information is information about points generated during the quantization process that require additional processing.
  • a decoding device including a processor and a memory.
  • the memory stores programs or instructions that can be run on the processor; when the programs or instructions are executed by the processor, the steps of the method described in the second aspect are implemented.
  • a decoding device including a processor and a communication interface, wherein the processor is configured to decode the obtained code stream to obtain quantization information and first information, where the first information includes at least one of the following: first-precision geometric information, second-precision geometric information, and supplementary point information.
  • the supplementary point information includes the fourth-precision geometric information of the supplementary point.
  • the fourth-precision geometric information is the three-dimensional coordinate information of the supplementary point lost during the quantization process;
  • the quantization information includes first quantization information used to indicate the first quantization parameter and second quantization information used to indicate the second quantization parameter,
  • the second quantization parameter is the quantization parameter used to quantize the first part of the geometric information in the first information, the first part of the geometric information including at least one of the second precision geometric information and the fourth precision geometric information of the supplementary point; the processor is further configured to perform inverse quantization processing on the first information according to the quantization information to obtain the target three-dimensional grid;
  • the first precision geometric information is the geometric information after quantization of the target three-dimensional grid
  • the second precision geometric information is the geometric information lost during the quantization process of the target three-dimensional grid
  • the supplementary point information is information about points generated during the quantization process that require additional processing.
  • a communication system including an encoding device and a decoding device, where the encoding device can be used to perform the steps of the method described in the first aspect, and the decoding device can be used to perform the steps of the method described in the second aspect.
  • In a tenth aspect, a readable storage medium is provided, on which programs or instructions are stored; when the programs or instructions are executed by a processor, the steps of the method described in the first aspect or the steps of the method described in the second aspect are implemented.
  • In an eleventh aspect, a chip is provided, including a processor and a communication interface; the communication interface is coupled to the processor, and the processor is used to run programs or instructions to implement the method described in the first aspect or the method described in the second aspect.
  • a computer program/program product is provided, where the computer program/program product is stored in a storage medium, and the computer program/program product is executed by at least one processor to implement the method described in the first aspect or the method described in the second aspect.
  • In the embodiments of the present application, the geometric information of the three-dimensional grid is quantized using the first quantization parameter, so that the spacing between the vertices of the three-dimensional grid is reduced after quantization, which in turn reduces the spacing between the projected two-dimensional vertices and improves the compression efficiency of the geometric information of the three-dimensional grid. Quantizing the first part of the geometric information in the first information (i.e., the high-precision geometric information) with the second quantization parameter can effectively control the number of bits of the high-precision geometric information, and thus effectively control the encoding quality.
  • Figure 1 is a schematic flow chart of the encoding method according to an embodiment of the present application.
  • Figure 2 is a schematic diagram of the grid-based fine division process.
  • Figure 3 is a schematic diagram of the eight directions of patch arrangement.
  • Figure 4 is a schematic diagram of the encoding process of high-precision geometric information.
  • Figure 5 is a schematic diagram of a raw patch.
  • Figure 6 is a schematic diagram of the video-based three-dimensional grid geometric information encoding framework.
  • Figure 7 is a schematic module diagram of an encoding device according to an embodiment of the present application.
  • Figure 8 is a schematic structural diagram of an encoding device according to an embodiment of the present application.
  • Figure 9 is a schematic flow chart of the decoding method according to an embodiment of the present application.
  • Figure 10 is a block diagram of geometric information reconstruction.
  • Figure 11 is a schematic diagram of the video-based three-dimensional grid geometric information decoding framework.
  • Figure 12 is a schematic module diagram of a decoding device according to an embodiment of the present application.
  • Figure 13 is a schematic structural diagram of a communication device according to an embodiment of the present application.
  • The terms "first", "second", etc. in the description and claims of this application are used to distinguish similar objects, not to describe a specific order or sequence. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the present application can be practiced in sequences other than those illustrated or described herein. Objects distinguished by "first" and "second" are usually of one type, and the number of objects is not limited;
  • for example, the first object can be one or multiple.
  • "And/or" in the description and claims indicates at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
  • LTE: Long Term Evolution
  • LTE-A: LTE-Advanced (Long Term Evolution Advanced)
  • CDMA: Code Division Multiple Access
  • TDMA: Time Division Multiple Access
  • FDMA: Frequency Division Multiple Access
  • OFDMA: Orthogonal Frequency Division Multiple Access
  • SC-FDMA: Single-carrier Frequency Division Multiple Access
  • NR: New Radio
  • this embodiment of the present application provides an encoding method, including:
  • Step 101 The encoding end performs quantization processing on the geometric information of the target three-dimensional grid according to the first quantization parameter to obtain the first information.
  • the target three-dimensional grid can be understood as the three-dimensional grid corresponding to any video frame.
  • the geometric information of the target three-dimensional grid can be understood as the coordinates of the vertices in the three-dimensional grid; these coordinates usually refer to three-dimensional coordinates.
  • the first information includes at least one of the following:
  • the first precision geometric information can be understood as low-precision geometric information; that is, it refers to the quantized geometric information of the target three-dimensional grid, i.e., the three-dimensional coordinate information of each vertex included in the quantized target three-dimensional grid.
  • the second precision geometric information can be understood as high-precision geometric information, and the high-precision geometric information can be regarded as geometric information lost in the quantization process, that is, lost three-dimensional coordinate information.
  • the supplementary point information refers to the information of points generated during the quantization process that require additional processing; that is to say, supplementary points are points generated during the quantization process that require additional processing, for example, repeated points whose coordinate positions overlap after quantization.
  • With the supplementary point information, vertices whose coordinate positions overlap during quantization can be inversely quantized back to their original positions.
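To illustrate why such points arise, quantization can map distinct vertices onto the same coordinates; a simple, hypothetical way to detect these collisions is:

```python
def find_overlapping(quantized_vertices):
    """Return indices of vertices whose quantized coordinates collide with an
    earlier vertex -- candidates for treatment as 'supplementary points'."""
    seen = {}
    duplicates = []
    for i, p in enumerate(map(tuple, quantized_vertices)):
        if p in seen:
            duplicates.append(i)   # same quantized position as vertex seen[p]
        else:
            seen[p] = i
    return duplicates

# Two vertices collapse onto (1, 1, 1) after quantization
dups = find_overlapping([(1, 1, 1), (2, 2, 2), (1, 1, 1)])
```

Recording extra information for such colliding vertices is what allows the decoder to move them back to distinct positions.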
  • the supplementary point information includes at least one of the following:
  • the low-precision geometric information of the supplementary points can be determined through the index of the vertices.
  • the third-precision geometric information can be understood as low-precision geometric information of the supplementary points, that is, the quantized three-dimensional coordinate information of the supplementary points.
  • the fourth precision geometric information can be understood as the high-precision geometric information of the supplementary point, that is, the three-dimensional coordinate information of the supplementary point that is lost during the quantization process.
  • the points hidden after quantization can be determined through A131 and A133, or through A132 and A133.
  • Step 102 The encoding end performs quantization processing on the first part of the geometric information in the first information according to the second quantization parameter.
  • the first part of the geometric information includes at least one of the second precision geometric information and the fourth precision geometric information of the supplementary point; the fourth precision geometric information is the three-dimensional coordinate information of the supplementary point lost during the quantization process.
  • In this way, the high-precision geometric information and/or the high-precision geometric information of the supplementary points can be further quantized, so as to effectively improve the compression efficiency under an acceptable degree of distortion.
  • Step 103 The encoding end encodes the quantized first information and quantization information.
  • the quantization information includes the first quantization information used to indicate the first quantization parameter and the second quantization information used to indicate the second quantization parameter;
  • the first quantization information here may be the first quantization parameter, or the first offset value of the first quantization parameter relative to the first reference quantization parameter.
  • the second quantization information may be a second quantization parameter, or a second offset value of the second quantization parameter relative to the second reference quantization parameter.
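The two signaling choices (a parameter itself, or an offset relative to a reference parameter) can be sketched as follows; `ref_qp` stands in for a GOP-level reference quantization parameter and is an assumption of this illustration:

```python
def encode_qp(qp, ref_qp=None):
    # Signal either the quantization parameter itself, or its offset
    # from a reference quantization parameter (e.g. one set per GOP).
    return qp if ref_qp is None else qp - ref_qp

def decode_qp(signaled, ref_qp=None):
    # Invert the choice made at the encoder.
    return signaled if ref_qp is None else ref_qp + signaled

offset = encode_qp(14, ref_qp=16)   # a small signed offset is signaled
qp = decode_qp(offset, ref_qp=16)   # the decoder recovers the parameter
```

Signaling the offset is typically cheaper than signaling the full parameter when per-frame parameters stay close to the GOP-level reference.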
  • encoding the quantized first information refers to encoding the second part of the geometric information in the first information and the quantized first part of the geometric information, and the second part of the geometric information is the second part of the first information.
  • the geometric information of the three-dimensional grid is quantified through the first quantization parameter, so that the spacing between the vertices of the three-dimensional grid is reduced after quantization, thereby reducing the spacing between the two-dimensional vertices after projection, thereby improving the three-dimensional
  • the compression efficiency of the geometric information of the grid, and the first part of the geometric information in the first information is quantized through the second quantization parameter, which can effectively control the number of bits of the high-precision geometric information, and thus effectively Control encoding quality.
  • the encoding end encodes the first quantization information, including:
  • the encoding end encodes the first quantization parameter; or
  • the encoding end obtains a first offset value of the first quantization parameter relative to a first reference quantization parameter, where the first reference quantization parameter is the reference quantization parameter set in the target group of pictures (Group Of Pictures, GOP) corresponding to the geometric information of the target three-dimensional grid, and the target GOP is the GOP in which the video frame corresponding to the target three-dimensional grid is located; the encoding end then encodes the first offset value.
  • each GOP is set with a first reference quantization parameter, and each video frame in the GOP is offset from the first reference quantization parameter according to the temporal prediction structure to obtain a first offset value.
  • the first reference quantization parameter includes a first reference quantization coefficient.
  • the first quantized information can be encoded in various ways.
  • Direct entropy coding can be used to encode the first quantization information, or other methods can be used. If entropy coding is used, zero-order exponential Golomb coding or context-based adaptive entropy coding can be adopted.
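For reference, zero-order exponential Golomb coding writes a non-negative integer n as ⌊log2(n+1)⌋ zero bits followed by the binary representation of n+1; a minimal sketch:

```python
def exp_golomb0(n: int) -> str:
    """Zero-order exponential Golomb code for a non-negative integer."""
    v = n + 1
    # (bit_length - 1) leading zeros, then the binary form of n + 1
    return "0" * (v.bit_length() - 1) + format(v, "b")

codes = [exp_golomb0(n) for n in range(5)]
```

Small values get short codewords ("1" for 0, "010" for 1), which suits quantization parameters and offsets that are usually small.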
  • the method in the embodiment of this application also includes:
  • the encoded first quantization information is written in the sequence parameter set of the target code stream, where the target code stream is obtained based on the first information of the target three-dimensional grid corresponding to each video frame; or
  • the encoded first quantization information corresponding to each video frame is written in the header of the code stream corresponding to that video frame, where the code stream corresponding to each video frame is obtained based on the first information of the target three-dimensional grid corresponding to the video frame.
  • the encoded first quantization parameter can be written in multiple positions of the code stream. If the entire video frame sequence uses the same first quantization parameter, the encoded first quantization parameter can be written in the sequence of the target code stream. In the parameter set, if each video frame uses a different first quantization parameter, then the encoded first quantization parameter is written in the header of the code stream corresponding to each video frame.
  • the encoding end encodes the second quantization information, including:
  • the encoding end encodes the second quantization parameter; or
  • the encoding end obtains a second offset value of the second quantization parameter relative to a second reference quantization parameter, where the second reference quantization parameter is the reference quantization parameter set in the target picture group GOP corresponding to the first part of the geometric information, and the target GOP is the GOP in which the video frame corresponding to the target three-dimensional grid is located; the encoding end then encodes the second offset value.
  • each GOP is set with a second reference quantization parameter
  • each video frame in the GOP is offset from the second reference quantization parameter according to the temporal prediction structure to obtain a second offset value.
  • the second reference quantization parameter includes a second reference quantization coefficient.
  • the second quantized information can be encoded in various ways.
  • Direct entropy coding can be used to encode the second quantization information, or other methods can be used. If entropy coding is used, zero-order exponential Golomb coding or context-based adaptive entropy coding can be adopted.
  • the method in the embodiment of this application also includes:
  • the encoded second quantization information is written in the sequence parameter set of the target code stream, where the target code stream is obtained based on the first information of the target three-dimensional grid corresponding to each video frame; or
  • when the first parts of the geometric information corresponding to at least two video frames in the video frame sequence use different second quantization information, the encoded second quantization information corresponding to each video frame is written in the header of the code stream corresponding to that video frame, where the code stream corresponding to each video frame is obtained based on the first information of the target three-dimensional grid corresponding to the video frame.
  • the encoded second quantization parameter can be written in multiple positions of the code stream. If the entire video frame sequence uses the same second quantization parameter, the encoded second quantization parameter can be written in the sequence of the target code stream. In the parameter set, if each video frame uses a different second quantization parameter, the encoded second quantization parameter is written in the header of the code stream corresponding to each video frame.
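The placement rule above amounts to a simple decision, sketched below as a toy model (real bitstream syntax is, of course, more involved):

```python
def place_quantization_parameter(per_frame_qps):
    """If every frame shares one quantization parameter, write it once in the
    sequence parameter set; otherwise write it per frame header."""
    if len(set(per_frame_qps)) == 1:
        return {"sequence_parameter_set": per_frame_qps[0], "frame_headers": None}
    return {"sequence_parameter_set": None, "frame_headers": list(per_frame_qps)}

shared = place_quantization_parameter([16, 16, 16])   # one sequence-level QP
varied = place_quantization_parameter([16, 18, 16])   # per-frame QPs
```

The same decision applies independently to the first and the second quantization parameter, which is why they may end up at different positions in the code stream.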
  • the above-mentioned encoded first quantization parameter and the second quantization parameter may be located at different positions in the code stream.
  • For example, the encoded first quantization parameter is located in the sequence parameter set of the target code stream while the encoded second quantization parameter is located in the header of the code stream corresponding to each video frame; or the encoded second quantization parameter is located in the sequence parameter set of the target code stream while the encoded first quantization parameter is located in the header of the code stream corresponding to each video frame. Of course, the encoded first quantization parameter and the encoded second quantization parameter can also be located at the same position in the code stream, for example both in the sequence parameter set of the target code stream, or both in the header of the code stream corresponding to the same video frame.
  • the first quantization information and the second quantization information are information required by the decoding end for decoding; encoding the first quantization information and the second quantization information enables the decoding end to decode quickly based on the first quantization information and the second quantization information.
  • step 101 includes:
  • the encoding end quantizes each vertex in the target three-dimensional grid according to the first quantization parameter of each component to obtain first precision geometric information.
  • the first quantization parameter of each component can be flexibly set according to usage requirements; the first quantization parameter mainly includes the quantization parameters for the three components in the X, Y and Z directions.
  • normally, for applications that do not require high accuracy, only low-precision geometric information needs to be retained after quantization; for applications that require higher accuracy, not only low-precision geometric information but also high-precision geometric information must be recorded during quantization, so that an accurate grid can be recovered during decoding.
  • the specific implementation of step 101 above should also include:
  • the encoding end obtains second precision geometric information based on the first precision geometric information and the first quantization parameter of each component.
  • the f 1 function in Formula 1 to Formula 3 is a quantization function.
  • the input of the quantization function is the coordinates of a certain dimension and the first quantization parameter of the dimension, and the output is the quantized coordinate value;
  • the input of the f 2 function in Formula 4 to Formula 6 is the original coordinate value, the quantized coordinate value and the first quantization parameter of the dimension (the first quantization parameter can also be described as the first quantization coefficient), and the output is a high-precision coordinate value.
  • the f 1 function can be calculated in a variety of ways.
  • a more common calculation method is as shown in Formula 7 to Formula 9, which is calculated by dividing the original coordinates of each dimension by the quantization parameter of that dimension.
  • / is the division operator, and the result of the division operation can be rounded in different ways, such as rounding, rounding down, rounding up, etc.
  • the f 2 implementations corresponding to Formula 7 to Formula 9 are as shown in Formula 10 to Formula 12, where * is the multiplication operator.
  • the f 1 function and f 2 function can be implemented using bit operations, such as Formula 13 to Formula 18:
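As a non-normative illustration, the division-based f 1/f 2 pair and its bit-operation variant might look like the sketch below. The concrete Formulas 7 to 18 are not reproduced in this excerpt, so the rounding-down choice and the power-of-two quantization parameter in the shift variant are assumptions of this sketch:

```python
def quantize(coord, qp):
    """f1 sketch: low-precision coordinate = original coordinate / QP,
    here rounded down (rounding, ceiling, etc. are equally valid per the text)."""
    return coord // qp

def recover_high_precision(coord, low, qp):
    """f2 sketch: high-precision part = information lost by f1."""
    return coord - low * qp

# Bit-operation variant, valid when the QP is a power of two, qp = 1 << bits:
def quantize_shift(coord, bits):
    return coord >> bits

def high_precision_shift(coord, bits):
    return coord & ((1 << bits) - 1)

x, qp_x = 1237, 8  # qp_x = 2**3, so both variants agree
assert quantize(x, qp_x) == quantize_shift(x, 3) == 154
assert recover_high_precision(x, quantize(x, qp_x), qp_x) == high_precision_shift(x, 3) == 5
```

Reconstructing the original coordinate is then `low * qp + high`, which is why the high-precision part must be kept whenever lossless recovery of that component is required.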
  • the first quantization parameters QP x , QP y and QP z can be flexibly set.
  • the first quantization parameters of different components are not necessarily equal.
  • first, the correlation between the first quantization parameters of different components can be used to establish the relationship between QP x , QP y and QP z , and different first quantization parameters can be set for different components; secondly, the first quantization parameters of different spatial regions are not necessarily equal, and the quantization parameters can be set adaptively according to the sparsity of the vertex distribution in the local region.
  • the high-precision geometric information contains detailed information of the outline of the three-dimensional mesh.
  • the high-precision geometric information (x h , y h , z h ) can be further processed.
  • the importance of high-precision geometric information of vertices in different areas is different. For areas where vertices are sparsely distributed, the distortion of high-precision geometric information will not have a major impact on the visual effect of the three-dimensional mesh.
  • after the quantization in step 101, multiple quantized points may completely coincide at the same position. In this case, the specific implementation of the above step 101 should also include:
  • the encoding end determines the information of the supplementary point based on the geometric information of the target three-dimensional grid and the first precision geometric information.
  • the points with repeated low-precision geometric information are used as supplementary points and encoded separately.
  • the geometric information of supplementary points can also be divided into two parts: low-precision geometric information and high-precision geometric information. According to the application's requirements for compression distortion, all supplementary points or only a part of them may be retained.
  • the high-precision geometric information of the supplementary points can also be further quantified, or the high-precision geometric information of only some points can be retained.
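Since the text states only that points whose low-precision geometric information coincides are treated as supplementary points, a minimal detection sketch might look as follows; the grouping key and the convention of keeping the first occurrence in the regular mesh are assumptions of this sketch:

```python
def find_supplementary_points(vertices, qp):
    """Group vertices by their quantized (low-precision) position; every vertex
    after the first at a given quantized position is flagged as a supplementary
    point. A sketch of the behaviour described above, not the normative rule."""
    seen = {}
    supplementary = []
    for idx, (x, y, z) in enumerate(vertices):
        key = (x // qp[0], y // qp[1], z // qp[2])
        if key in seen:
            supplementary.append(idx)  # coincides with an earlier quantized point
        else:
            seen[key] = idx
    return supplementary

verts = [(10, 10, 10), (11, 11, 11), (30, 30, 30)]
print(find_supplementary_points(verts, (8, 8, 8)))  # first two collide -> [1]
```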
  • the final first information (hereinafter described as the first information) needs to be encoded to obtain the final code stream.
  • the specific implementation process of encoding the first information in the embodiment includes:
  • Step 1021 The encoding end processes the first information to obtain second information, where the second information includes at least one of a placeholder map and a geometric map;
  • Step 1022 The encoding end encodes the second information.
  • the specific implementation of step 1021 is explained below from the perspective of the different types of information.
  • the first information includes first precision geometric information
  • step 1021 includes:
  • Step 10211 The encoding end divides the first precision geometric information into three-dimensional slices
  • the main step is to divide the low-precision geometric information into patches to obtain multiple three-dimensional patches;
  • the specific implementation method of this step is: the encoding end determines the projection plane of each vertex contained in the first precision geometric information; the encoding end performs slice division on the vertices contained in the first precision geometric information according to the projection planes; the encoding end clusters the vertices contained in the first precision geometric information to obtain each divided slice.
  • the process of patch division mainly includes: first, estimating the normal vector of each vertex and selecting the candidate projection plane whose normal vector has the smallest angle with the vertex normal vector as the projection plane of the vertex; then, initially dividing the vertices according to the projection planes, composing vertices with the same and connected projection planes into patches; finally, using the fine division algorithm to optimize the clustering results to obtain the final three-dimensional patch (3D patch). Then, the 3D patch is projected onto the two-dimensional plane to obtain the 2D patch.
  • first, the projection plane of each vertex is initially selected: among the candidate projection planes, the plane whose normal vector direction is closest to the vertex normal vector direction is selected as the projection plane of the vertex.
  • the calculation process of plane selection is as shown in Equation 21:
  • the fine division process can use a grid-based algorithm to reduce the time complexity of the algorithm.
  • the grid-based fine division algorithm flow is shown in Figure 2, which specifically includes:
  • Step 201 Divide the (x, y, z) geometric coordinate space into voxels.
  • Step 202 Find filled voxels.
  • Filled voxels refer to voxels that contain at least one point in the grid.
  • Step 203 Calculate the smoothing score of each filled voxel on each projection plane, recorded as voxScoreSmooth.
  • the voxel smoothing score of the voxel on a certain projection plane is the number of points gathered to the projection plane through the initial segmentation process.
  • Step 204 Use KD-Tree partitioning to find the nearest filled voxels, denoted as nnFilledVoxels, that is, the nearest filled voxels of each filled voxel (within the search radius and/or limited to the maximum number of adjacent voxels).
  • Step 205 Use the voxel smoothing score of the nearest-neighbor filled voxels on each projection plane to calculate the smoothing score (scoreSmooth) of each filled voxel.
  • the calculation process is as shown in Equation 22:
  • Step 206 Calculate the normal score using the normal vector of the vertex and the normal vector of the candidate projection plane, recorded as scoreNormal.
  • the calculation process is as shown in Formula 23:
  • p is the index of the projection plane and i is the index of the vertex.
  • Step 207 Use scoreSmooth and scoreNormal to calculate the final score of each voxel on each projection plane.
  • the calculation process is as shown in Equation 24:
  • i is the vertex index
  • p is the index of the projection plane
  • v is the voxel index where vertex i is located.
  • Step 208 Use the scores in step 207 to cluster the vertices to obtain finely divided patches.
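The grid-based fine division above (steps 203 to 208) can be sketched roughly as follows. The exact forms of Equations 22 to 24 are not reproduced in this excerpt, so the score normalization, the combination weight `lam`, and the simplification of using only each vertex's own voxel (omitting the nearest-voxel search of steps 204 to 205) are all assumptions of this sketch:

```python
import numpy as np

def fine_partition_scores(normals, init_plane, voxel_of_vertex, plane_normals, lam=1.0):
    """Sketch of steps 203-208: per-voxel smoothing scores from the initial
    segmentation, per-vertex normal scores, and a combined score whose argmax
    gives the refined plane assignment of each vertex."""
    n_planes = len(plane_normals)
    n_vox = int(voxel_of_vertex.max()) + 1
    # Step 203: voxScoreSmooth[v][p] = number of the voxel's points initially on plane p
    vox_score = np.zeros((n_vox, n_planes))
    for i, p in enumerate(init_plane):
        vox_score[voxel_of_vertex[i], p] += 1
    # Step 205 (simplified): scoreSmooth from the vertex's own voxel, normalized
    score_smooth = vox_score / np.maximum(vox_score.sum(axis=1, keepdims=True), 1)
    # Step 206: scoreNormal[i][p] = <vertex normal i, plane normal p>
    score_normal = normals @ plane_normals.T
    # Step 207: combined score; Step 208: re-cluster by the best-scoring plane
    final = score_normal + lam * score_smooth[voxel_of_vertex]
    return final.argmax(axis=1)
```

With three vertices in one voxel, two initially on plane 0 and one on plane 1, the smoothing term pulls ambiguous vertices toward the locally dominant plane while the normal term keeps clearly oriented vertices on their own plane.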
  • Step 10212 The encoding end performs two-dimensional projection on the divided three-dimensional slice to obtain the two-dimensional slice;
  • this process is to project the 3D patch onto a two-dimensional plane to obtain a two-dimensional patch (2D patch).
  • Step 10213 The encoding end packages the two-dimensional slices to obtain two-dimensional image information
  • this step implements patch packing.
  • the purpose of patch packing is to arrange 2D patches on a two-dimensional image.
  • the basic principle of patch packing is to arrange patches on a two-dimensional image without overlap, or with only the pixel-free parts of patches partially overlapping.
  • arranging the patches more closely, and with time-domain consistency, improves coding performance.
  • the resolution of the 2D image is W ⁇ H
  • the minimum block size that defines the patch arrangement is T, which specifies the minimum distance between different patches placed on this 2D grid.
  • patches are inserted and placed on the 2D grid according to the non-overlapping principle.
  • Each patch occupies an area consisting of an integer number of T×T blocks.
  • the patches can choose a variety of different arrangement directions. For example, eight different arrangement directions can be adopted, as shown in Figure 3, including 0 degrees, 180 degrees, 90 degrees, 270 degrees and mirror images of the first four directions.
  • further, a patch arrangement method with temporal consistency is adopted: within a Group of Pictures (GOP), all patches of the first frame are arranged in order from largest to smallest, and for subsequent frames the temporal consistency algorithm is used to adjust the order of the patches.
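A minimal greedy sketch of the non-overlapping packing described above, placing patches largest-first on a T×T block grid; the eight arrangement directions of Figure 3 and the temporal-consistency reordering are omitted, and the left-to-right, top-to-bottom placement order is an assumption of this sketch:

```python
def pack_patches(patches, W, H, T):
    """Greedy packing sketch: each patch (w, h) in pixels is snapped up to whole
    T x T blocks and placed at the first free block position on a W x H image."""
    cols, rows = W // T, H // T
    occupied = [[False] * cols for _ in range(rows)]
    placements = {}
    # largest-first ordering mirrors the first-frame arrangement described above
    order = sorted(range(len(patches)), key=lambda i: -(patches[i][0] * patches[i][1]))
    for i in order:
        bw = -(-patches[i][0] // T)  # blocks needed horizontally (ceil division)
        bh = -(-patches[i][1] // T)  # blocks needed vertically
        for r in range(rows - bh + 1):
            for c in range(cols - bw + 1):
                if all(not occupied[r + dr][c + dc]
                       for dr in range(bh) for dc in range(bw)):
                    for dr in range(bh):
                        for dc in range(bw):
                            occupied[r + dr][c + dc] = True
                    placements[i] = (c * T, r * T)  # pixel position of the patch
                    break
            if i in placements:
                break
    return placements

print(pack_patches([(16, 16), (8, 8)], 32, 32, 8))  # {0: (0, 0), 1: (16, 0)}
```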
  • the patch information can be obtained based on the information in the process of obtaining the two-dimensional image information. After that, the patch information can be encoded and the patch information sub-stream can be obtained;
  • the patch information records the information of each operation in the process of obtaining the two-dimensional image information.
  • the patch information includes: patch division information, patch projection plane information, and patch packing position information.
  • Step 10214 The encoding end obtains a first-precision placeholder map and a first-precision geometric map based on the two-dimensional image information.
  • the process of obtaining the placeholder map is mainly: using the patch arrangement information obtained by patch packing, setting the position of the vertex in the two-dimensional image to 1, and setting the remaining positions to 0 to obtain the placeholder map.
  • the process of obtaining the geometric map is mainly as follows: in the process of obtaining the 2D patch through projection, the distance from each vertex to the projection plane is saved. This distance is called depth.
  • for low-precision geometric map compression, the depth value of each vertex in the 2D patch is arranged at the position of that vertex in the placeholder map, obtaining a low-precision geometric map.
  • the first information includes second precision geometric information.
  • step 1021 includes:
  • Step 10215 The encoding end obtains the arrangement order of the vertices contained in the first precision geometric information
  • Step 10216 The encoding end arranges the second-precision geometric information corresponding to the vertices contained in the first-precision geometric information in the two-dimensional image to generate a second-precision geometric map.
  • the high-precision geometric information is arranged in an original patch (raw patch): the high-precision geometric information corresponding to the vertices in the low-precision geometric map is arranged in a two-dimensional image to obtain the raw patch, thereby generating a high-precision geometric map. This is mainly divided into three steps, as shown in Figure 4, including:
  • Step 401 Obtain the vertex arrangement order, scan the low-precision geometric map line by line from left to right, and use the scanning order of each vertex as the vertex arrangement order in the raw patch.
  • Step 402 Generate the raw patch.
  • As shown in Figure 5, a raw patch is a rectangular patch formed by arranging the three-dimensional coordinates of the vertices row by row. According to the vertex arrangement order obtained in the first step, the high-precision geometric information of the vertices is arranged in order to obtain the high-precision raw patch.
  • Step 403 Place the high-precision geometric information in a two-dimensional image to generate a high-precision geometric map.
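Steps 401 and 402 above can be sketched as follows; representing the raw patch as a simple ordered list of coordinate triples is a simplification of the rectangular row-by-row arrangement shown in Figure 5:

```python
def build_high_precision_raw_patch(occupancy, high_precision):
    """Step 401: scan the placeholder (occupancy) map row by row, left to right;
    the scan order of occupied pixels defines the vertex order.
    Step 402: arrange the high-precision (xh, yh, zh) triples in that order."""
    order = []
    for r, row in enumerate(occupancy):
        for c, filled in enumerate(row):
            if filled:
                order.append((r, c))
    return [high_precision[pos] for pos in order]

occ = [[1, 0],
       [0, 1]]
hp = {(0, 0): (1, 2, 3), (1, 1): (4, 5, 6)}
print(build_high_precision_raw_patch(occ, hp))  # [(1, 2, 3), (4, 5, 6)]
```

Because the decoder performs the same scan over the decoded placeholder map, the vertex order need not be signalled explicitly.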
  • the encoding end will encode the first-precision geometric figure and the second-precision geometric figure to obtain the geometric figure sub-stream.
  • the first information includes supplementary point information.
  • step 1021 includes:
  • Step 10217 The encoding end arranges the third-precision geometric information of the supplementary points into the first original slice;
  • Step 10218 The encoding end arranges the fourth precision geometric information of the supplementary points into the second original slice in the same order as the first original slice;
  • Step 10219 The encoding end compresses the first original slice and the second original slice to obtain a geometric map of the supplementary points.
  • the low-precision part and the high-precision part of the geometric information of the supplementary points are encoded separately.
  • first, the low-precision geometric information of the supplementary points is arranged into a supplementary-point low-precision raw patch in any order; then, the high-precision geometric information is arranged into a supplementary-point high-precision raw patch in the same order as the low-precision raw patch; finally, the supplementary-point low-precision and high-precision raw patches are compressed, for which a variety of compression methods can be used. One method is to encode the values in the raw patch by run-length coding, entropy coding, etc.
  • the other method is to add the supplementary-point low-precision raw patch to the blank area in the low-precision geometric map and add the supplementary-point high-precision raw patch to the blank area in the high-precision geometric map, thereby obtaining the geometric map of the supplementary points.
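As one of the compression options mentioned above, a minimal run-length coder over the raw-patch values might look like this; in practice entropy coding of the runs would follow, and the pairing of values with run lengths is an assumption of this sketch:

```python
def run_length_encode(values):
    """Collapse consecutive equal raw-patch values into [value, count] pairs."""
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1][1] += 1          # extend the current run
        else:
            out.append([v, 1])       # start a new run
    return out

print(run_length_encode([0, 0, 0, 5, 5, 7]))  # [[0, 3], [5, 2], [7, 1]]
```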
  • the video-based three-dimensional grid geometric information encoding framework of the embodiment of this application is shown in Figure 6.
  • the overall encoding process is:
  • first, the three-dimensional grid is quantized, which may produce three parts: low-precision geometric information, high-precision geometric information and supplementary point information. For the low-precision geometric information, projection is used to divide patches and arrange patches, generating patch sequence compression information (patch division information), a placeholder image and a low-precision geometric image. For possible high-precision geometric information, raw patches can be arranged (the high-precision geometric map can be separately encoded into a code stream, or the high-precision geometric map can be filled into the low-precision geometric map and the low-precision geometric map encoded to obtain one code stream). For possible supplementary points, the geometric information of the supplementary points can be divided into a low-precision part and a high-precision part, arranged into raw patches respectively, and separately encoded into one code stream, or the raw patches can be added to the geometric map. Finally, the patch sequence compression information, placeholder image and geometric image are encoded to obtain the corresponding sub-code streams, and the multiple sub-code streams are multiplexed to obtain the final output code stream.
  • this application provides an implementation method of encoding the geometric information of the three-dimensional grid.
  • the geometric information of the three-dimensional grid is quantized through the first quantization parameter, and the high-precision geometric information is further quantized through the second quantization parameter; the quantized information of different precisions is then encoded separately, thereby improving the compression efficiency of the three-dimensional mesh geometric information.
  • the execution subject may be an encoding device.
  • the encoding device performing the encoding method is taken as an example to illustrate the encoding device provided by the embodiment of the present application.
  • this embodiment of the present application provides an encoding device 700, which includes:
  • the first processing module 701 is configured to perform quantification processing on the geometric information of the target three-dimensional grid according to the first quantization parameter to obtain first information.
  • the first information includes at least one of the following: first precision geometric information, second precision geometric information and supplementary point information;
  • the supplementary point information includes fourth-precision geometric information of the supplementary point, and the fourth-precision geometric information is the three-dimensional coordinate information lost during the quantification process of the supplementary point;
  • the second processing module 702 is configured to perform quantization processing on the first part of the geometric information in the first information according to the second quantization parameter, where the first part of the geometric information includes at least one of the second precision geometric information and the fourth precision geometric information of the supplementary point;
  • the first encoding module 703 is configured to encode the quantized first information and the quantization information, where the quantization information includes the first quantization information used to indicate the first quantization parameter and the second quantization information used to indicate the second quantization parameter.
  • the first precision geometric information is the geometric information after quantization of the target three-dimensional grid
  • the second precision geometric information is the geometric information lost during the quantization process of the target three-dimensional grid
  • the information of the supplementary points is information about points generated during the quantization process that require additional processing.
  • the first encoding module 703 is configured to: obtain a first offset value of the first quantization parameter relative to a first reference quantization parameter, where the first reference quantization parameter is the reference quantization parameter set in the target picture group GOP and corresponding to the geometric information of the target three-dimensional grid, and the target GOP is the GOP in which the video frame corresponding to the target three-dimensional grid is located; and encode the first offset value.
  • the device of the embodiment of the present application also includes:
  • a first writing module configured to: when the target three-dimensional grid corresponding to each video frame in the video frame sequence uses the same first quantization information, write the encoded first quantization information in the sequence parameter set of the target code stream, where the target code stream is obtained based on the first information of the target three-dimensional grid corresponding to each of the video frames; and when the target three-dimensional grids corresponding to at least two video frames use different first quantization information, write the encoded first quantization information corresponding to each video frame in the header of the code stream corresponding to that video frame, where the code stream corresponding to each video frame is obtained based on the first information of the target three-dimensional grid corresponding to that video frame.
  • the first encoding module 703 is configured to: obtain a second offset value of the second quantization parameter relative to a second reference quantization parameter, where the second reference quantization parameter is the reference quantization parameter set in the target picture group GOP and corresponding to the first part of the geometric information, and the target GOP is the GOP in which the video frame corresponding to the target three-dimensional grid is located; and encode the second offset value.
  • the device of the embodiment of the present application also includes:
  • a second writing module configured to: when the first part of the geometric information corresponding to each video frame in the video frame sequence uses the same second quantization information, write the encoded second quantization information in the sequence parameter set of the target code stream, where the target code stream is obtained according to the first information of the target three-dimensional grid corresponding to each of the video frames; and when the first part of the geometric information corresponding to at least two video frames uses different second quantization information, write the encoded second quantization information corresponding to each video frame in the header of the code stream corresponding to that video frame, where the code stream corresponding to each video frame is obtained based on the first information of the target three-dimensional grid corresponding to that video frame.
  • the first processing module 701 is configured to quantize each vertex in the target three-dimensional mesh according to the first quantization parameter of each component to obtain the first precision geometry. information.
  • the first processing module 701 is also configured to obtain second precision geometric information based on the first precision geometric information and the first quantization parameter of each component.
  • the first processing module 701 is also configured to determine supplementary point information based on the geometric information of the target three-dimensional grid and the first precision geometric information.
  • the supplementary point information further includes at least one of the following:
  • the third precision geometric information of the supplementary point, where the third precision geometric information is the quantized three-dimensional coordinate information of the supplementary point.
  • the first encoding module 703 includes:
  • the third acquisition sub-module is used to process the quantized first information and acquire second information, where the second information includes at least one of a placeholder map and a geometric map;
  • the first encoding submodule is used to encode the second information.
  • the third acquisition sub-module includes:
  • a first dividing unit configured to divide the first precision geometric information into three-dimensional slices
  • the first acquisition unit is used to perform two-dimensional projection on the divided three-dimensional slices to obtain the two-dimensional slices;
  • a second acquisition unit used to package the two-dimensional slices and acquire two-dimensional image information
  • the third acquisition unit is configured to acquire a first-precision placeholder map and a first-precision geometric map according to the two-dimensional image information.
  • the third acquisition sub-module further includes:
  • the fourth acquisition unit is used to pack the two-dimensional slices in the second acquisition unit and acquire the two-dimensional image information, and acquire the slice information according to the information in the process of acquiring the two-dimensional image information;
  • the fifth acquisition unit is used to encode the slice information and obtain the slice information sub-stream.
  • the third acquisition sub-module includes:
  • the sixth acquisition unit is used to acquire the arrangement order of the vertices contained in the first precision geometric information
  • the seventh acquisition unit is used to arrange the second-precision geometric information corresponding to the vertices contained in the first-precision geometric information in a two-dimensional image to generate a second-precision geometric map.
  • the first encoding sub-module is used to encode the geometric map of the first precision and the geometric map of the second precision to obtain the geometric map sub-stream.
  • when the first information after quantization processing includes information of supplementary points, the third acquisition sub-module includes:
  • a first arrangement unit configured to arrange the third-precision geometric information of the supplementary points into a first original piece
  • a second arrangement unit configured to arrange the fourth precision geometric information of the supplementary points into a second original slice in the same arrangement order as the first original slice;
  • An eighth acquisition unit is used to compress the first original slice and the second original slice and acquire a geometric map of the supplementary point.
  • the above scheme quantizes the geometric information of the three-dimensional grid through the first quantization parameter, so that the spacing between the vertices of the three-dimensional grid is reduced after quantization, thereby reducing the spacing between the two-dimensional vertices after projection and improving the compression efficiency of the three-dimensional grid geometric information; the first part of the geometric information in the first information (i.e., the high-precision geometric information) is quantized through the second quantization parameter, which can effectively control the number of bits of the high-precision geometric information, thereby effectively controlling the encoding quality.
  • This device embodiment corresponds to the above-mentioned encoding method embodiment.
  • Each implementation process and implementation manner of the above-mentioned method embodiment can be applied to this device embodiment, and can achieve the same technical effect.
  • the encoding device 800 includes: a processor 801, a network interface 802, and a memory 803.
  • the network interface 802 is, for example, a common public radio interface (CPRI).
  • the encoding device 800 in the embodiment of the present application also includes: instructions or programs stored in the memory 803 and executable on the processor 801.
  • the processor 801 calls the instructions or programs in the memory 803 to execute the methods performed by the modules shown in Figure 7 and achieves the same technical effect; to avoid repetition, details are not repeated here.
  • this embodiment of the present application also provides a decoding method, including:
  • Step 901 The decoder decodes the obtained code stream to obtain quantized information and first information.
  • the first information includes at least one of the following: first precision geometric information, second precision geometric information, and supplementary point information.
  • the information of the supplementary point includes the fourth precision geometric information of the supplementary point, and the fourth precision geometric information is the three-dimensional coordinate information lost during the quantization process of the supplementary point;
  • the quantization information includes first quantization information used to indicate the first quantization parameter and second quantization information used to indicate the second quantization parameter; the second quantization parameter is the quantization parameter used to quantize the first part of the geometric information in the first information, and the first part of the geometric information includes at least one of the second precision geometric information and the fourth precision geometric information of the supplementary point.
  • Step 902 The decoder performs inverse quantization processing on the first information according to the quantization information to obtain the target three-dimensional grid.
  • the first precision geometric information is the geometric information after quantization of the target three-dimensional grid
  • the second precision geometric information is the geometric information lost during the quantization process of the target three-dimensional grid
  • the information of the supplementary points is information about points generated during the quantization process that require additional processing.
  • the target three-dimensional grid can be understood as the three-dimensional grid corresponding to any video frame.
  • the geometric information of the target three-dimensional grid can be understood as the coordinates of the vertices in the three-dimensional grid; these coordinates usually refer to three-dimensional coordinates.
  • the decoder decodes the acquired code stream to obtain the quantized information and the first information, and can quickly perform inverse quantization processing on the first information based on the quantized information to obtain the target three-dimensional grid.
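A sketch of the per-vertex inverse quantization implied by step 902, assuming the division-based quantization described on the encoding side (so that reconstruction is low × QP + high per component); the function name and the tuple-based data layout are illustrative assumptions:

```python
def dequantize_vertex(low, high, qp):
    """Recombine the low-precision coordinate with the high-precision remainder
    per component: original = low * QP + high for each of X, Y, Z."""
    return tuple(l * q + h for l, h, q in zip(low, high, qp))

print(dequantize_vertex((154, 10, 3), (5, 1, 0), (8, 8, 8)))  # (1237, 81, 24)
```

If the high-precision part was itself quantized with the second quantization parameter, it would first be inverse-quantized before this recombination.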
  • the first quantization parameter can be obtained directly by decoding the obtained code stream, or the first offset value of the first quantization parameter relative to the first reference quantization parameter can be obtained, and the first quantization parameter can then be derived from the first offset value.
  • obtaining the first quantization parameter includes:
  • the decoder obtains a first offset value of the first quantization parameter relative to a first reference quantization parameter, where the first reference quantization parameter is the reference quantization parameter set in the target picture group GOP and corresponding to the geometric information of the target three-dimensional grid, and the target GOP is the GOP in which the video frame corresponding to the target three-dimensional grid is located;
  • the first quantization parameter is obtained according to the first offset value and the first reference quantization parameter.
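The derivation of the quantization parameter from the signalled offset can be sketched as below; the additive relation is an assumption, since the excerpt states only that the parameter is obtained from the offset and the reference:

```python
def recover_quantization_parameter(offset, reference_qp):
    """Sketch: QP = reference QP (set in the target GOP) + signalled offset.
    Signalling only the per-frame offset keeps the bitstream cost small when
    frames in a GOP use similar parameters."""
    return reference_qp + offset

print(recover_quantization_parameter(2, 16))   # 18
print(recover_quantization_parameter(-3, 16))  # 13
```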
  • obtaining the first quantization information includes:
  • when the first quantization information adopted by the target three-dimensional grid corresponding to each video frame in the video frame sequence is the same, the first quantization information is obtained from the sequence parameter set of the target code stream, where the target code stream is obtained by encoding, at the encoding end, the first information of the target three-dimensional grid corresponding to each video frame;
  • when the target three-dimensional grids corresponding to at least two video frames use different first quantization information, the first quantization information corresponding to each video frame is obtained from the header of the code stream corresponding to that video frame, where the code stream corresponding to each video frame is obtained by encoding the first information of the target three-dimensional grid corresponding to that video frame at the encoding end.
  • the decoding end obtains the second quantization parameter, including:
  • the decoding end obtains a second offset value of the second quantization parameter relative to a second reference quantization parameter, where the second reference quantization parameter is the reference quantization parameter, set in the target group of pictures (GOP), corresponding to the first part of the geometric information, and the target GOP is the GOP in which the video frame corresponding to the target three-dimensional grid is located;
  • the decoding end obtains the second quantization parameter according to the second offset value and the second reference quantization parameter.
  • the second quantization parameter can be obtained directly by decoding the obtained code stream, or a second offset value of the second quantization parameter relative to a second reference quantization parameter can be obtained, and the second quantization parameter is then derived from that second offset value.
  • the decoding end obtains the second quantization parameter, including:
  • when the second quantization parameter adopted by the first part of the geometric information corresponding to each video frame in the video frame sequence is the same, the second quantization parameter is obtained from the sequence parameter set of the target code stream, the target code stream being obtained by the encoding end encoding the first information of the target three-dimensional grid corresponding to each video frame;
  • when the first part of the geometric information corresponding to at least two video frames in the video frame sequence uses different second quantization parameters, the second quantization parameter corresponding to each video frame is obtained from the header of the code stream corresponding to that video frame, the code stream corresponding to each video frame being obtained by the encoding end encoding the first information of the target three-dimensional grid corresponding to that video frame.
  • reconstructing the geometric information first requires decoding the first quantization parameter and the second quantization parameter from the code stream.
  • the corresponding first quantization parameter and second quantization parameter are read at the corresponding positions in the code stream, using a decoding method that matches the encoding method. If the entire sequence uses the same quantization parameter, it can be read from the sequence parameter set of the code stream; if different video frames use different quantization parameters, it can be read from the header of each frame's code stream; if a baseline quantization parameter is set for each GOP and each frame offsets the baseline according to the temporal prediction structure, the offset can be read from the header of each frame's data.
  • the reading position of the first quantization parameter is similar to that of the second quantization parameter, but the two are not necessarily stored at the same position.
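The three signalling options above (sequence-wide QP, per-frame QP, per-GOP baseline plus per-frame offset) can be sketched as one resolution routine; all field names here are hypothetical placeholders, not the patent's bitstream syntax:

```python
# Illustrative sketch: choose where to read a quantization parameter
# depending on how the encoder signalled it.
def read_qp(sps, frame_header, gop_baseline=None):
    if "qp" in sps:                          # whole sequence shares one QP:
        return sps["qp"]                     # read from the sequence parameter set
    if gop_baseline is not None:             # per-GOP baseline + per-frame offset:
        return gop_baseline + frame_header.get("qp_offset", 0)
    return frame_header["qp"]                # otherwise per-frame QP in the header
```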
  • the decoding end performs inverse quantization processing on the first information according to the quantization information to obtain the target three-dimensional grid, including:
  • the decoder performs inverse quantization processing on the first part of the geometric information in the first information according to the second quantization information;
  • the decoder performs inverse quantization processing on the quantized first information according to the first quantization parameter to obtain the target three-dimensional grid.
  • that is, the first part of the geometric information in the first information is first dequantized according to the second quantization parameter, and then the second part of the geometric information and the dequantized first part are dequantized according to the first quantization parameter,
  • where the second part of the geometric information is the information in the first information other than the first part of the geometric information.
  • the decoding end decodes the obtained code stream and obtains the first information.
  • the specific implementation method includes:
  • the decoding end obtains a target sub-code stream according to the obtained code stream.
  • the target sub-code stream includes: a slice information sub-stream, a placeholder map sub-stream and a geometric map sub-stream;
  • the decoding end obtains second information according to the target sub-code stream, and the second information includes: at least one of a placeholder map and a geometric map;
  • the decoding end obtains the first information based on the second information.
  • obtaining the first information according to the second information includes:
  • the decoder acquires two-dimensional image information based on the first-precision placeholder map and the first-precision geometric map;
  • the decoding end obtains a two-dimensional slice according to the two-dimensional image information
  • the decoding end performs three-dimensional back-projection on the two-dimensional slice according to the slice information corresponding to the slice information sub-stream to obtain the three-dimensional slice;
  • the decoder acquires first precision geometric information based on the three-dimensional slice.
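The back-projection of a two-dimensional slice to a three-dimensional slice can be illustrated with the following hedged sketch; the patch fields `normal_axis`, `u0`, `v0`, and `d0` are assumptions introduced for illustration, not the patent's slice-information syntax:

```python
# Back-project a pixel (u, v) of a 2D slice with decoded depth into 3D:
# the two tangent coordinates come from the 2D position plus the patch's
# bounding-box origin, and the normal coordinate comes from the depth value.
def backproject(u, v, depth, patch):
    n = patch["normal_axis"]                  # projection-plane normal axis (0/1/2)
    t, b = [a for a in (0, 1, 2) if a != n]   # the two tangent axes
    p = [0, 0, 0]
    p[t] = patch["u0"] + u
    p[b] = patch["v0"] + v
    p[n] = patch["d0"] + depth
    return tuple(p)
```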
  • obtaining the first information according to the second information includes:
  • the decoder obtains second-precision geometric information based on the second-precision geometric map.
  • obtaining the first information based on the second information includes:
  • the decoding end determines the first original slice corresponding to the third precision geometric information of the supplementary point and the second original slice corresponding to the fourth precision geometric information of the supplementary point according to the geometric map of the supplementary point;
  • the decoding end determines the information of the supplementary point based on the first original slice and the second original slice.
  • the geometric information of the supplementary points is divided into low-precision parts and high-precision parts and is decoded separately.
  • the geometric map of the supplementary points is decompressed.
  • Various decompression methods can be used. Among them, one method is to decode the geometric map through run-length decoding, entropy decoding, etc.
  • the other method is to extract the low-precision raw patch of the supplementary points from the low-precision geometric map, and extract the high-precision raw patch of the supplementary points from the high-precision geometric map.
  • the low-precision geometric information of the supplementary points is obtained from the low-precision raw patch of the supplementary points in a specific order
  • the high-precision geometric information of the supplementary points is obtained from the high-precision raw patch of the supplementary points in a specific order;
  • the specific order is obtained by the decoding end by parsing the code stream; that is, whichever order the encoding end uses to generate the low-precision raw patch and the high-precision raw patch of the supplementary points is signalled to the decoder through the code stream.
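As a toy sketch of reading the supplementary-point geometry in the signalled order, the order is represented here as a parsed index list, purely for illustration:

```python
# Pair up low- and high-precision geometry of supplementary points,
# visiting both raw patches in the order the encoder signalled.
def read_supplementary_points(low_raw, high_raw, order):
    return [(low_raw[i], high_raw[i]) for i in order]

# an order of [2, 0, 1] means the encoder wrote point 2 first, then 0, then 1
pts = read_supplementary_points([1, 2, 3], [9, 8, 7], order=[2, 0, 1])
```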
  • the decoder performs inverse quantization processing on the quantized first information according to the first quantization parameter to obtain the target three-dimensional grid, including:
  • the decoder determines the coordinates of each vertex in the first precision geometry information based on the first precision geometry information and the first quantization parameter of each component.
  • the decoder performs inverse quantization processing on the quantized first information according to the first quantization parameter to obtain the target three-dimensional grid, which further includes:
  • the decoder determines the target three-dimensional grid based on the coordinates of each vertex in the target three-dimensional grid and the second precision geometric information.
  • the geometric information reconstruction process in the embodiment of the present application is a process of reconstructing a three-dimensional geometric model using information such as patch information, placeholder maps, low-precision geometric maps, and high-precision geometric maps.
  • the specific process is shown in Figure 10, which mainly includes the following four steps:
  • Step 1001 obtain 2D patch
  • obtaining a 2D patch refers to using the patch information to segment the placeholder information and depth information of the 2D patch from the placeholder map and geometric map.
  • the patch information contains the position and size of each 2D patch's bounding box in the placeholder map and low-precision geometric map.
  • the placeholder information and low-precision geometric information of the 2D patch can be obtained directly using the patch information, the placeholder map, and the low-precision geometric map. For high-precision geometric information, the vertex scanning order of the low-precision geometric map is used to map the high-precision geometric information in the high-precision raw patch to the vertices of the low-precision geometric map, thereby obtaining the high-precision geometric information of the 2D patch.
  • the low-precision geometric information and high-precision geometric information of supplementary points can be obtained by directly decoding the low-precision raw patch and high-precision raw patch of supplementary points.
  • Step 1002 reconstruct 3D patch
  • reconstructing a 3D patch refers to using the placeholder information and low-precision geometric information in the 2D patch to reconstruct the vertices in the 2D patch into a low-precision 3D patch.
  • the placeholder information of a 2D patch contains the position of the vertex relative to the coordinate origin in the local coordinate system of the patch projection plane, and the depth information contains the depth value of the vertex in the normal direction of the projection plane. Therefore, the 2D patch can be reconstructed into a low-precision 3D patch in the local coordinate system using the occupancy information and depth information.
  • Step 1003 reconstruct the low-precision geometric model
  • reconstructing a low-precision geometric model refers to using the reconstructed low-precision 3D patch to reconstruct the entire low-precision three-dimensional geometric model.
  • the patch information contains the conversion relationship of the 3D patch from the local coordinate system to the global coordinate system of the three-dimensional geometric model. Using the coordinate conversion relationship to convert all 3D patches to the global coordinate system, a low-precision three-dimensional geometric model is obtained.
  • the geometric information in the low-precision raw patch is directly used to obtain the low-precision coordinate values of the supplementary points in the global coordinate system, thereby obtaining a complete low-precision three-dimensional geometric model.
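A minimal sketch of the coordinate conversion in step 1003, under the simplifying assumption that each patch's local-to-global conversion is a pure translation (any rotation component of the conversion relationship is omitted for brevity):

```python
# Convert a 3D patch from its local coordinate system to the global
# coordinate system of the geometric model via a per-patch offset.
def to_global(points, offset):
    ox, oy, oz = offset
    return [(x + ox, y + oy, z + oz) for (x, y, z) in points]

patch_global = to_global([(0, 0, 1), (1, 2, 3)], offset=(10, 20, 30))
```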
  • Step 1004 reconstruct the high-precision geometric model
  • the second quantization parameter is used to perform inverse quantization processing on the quantized high-precision geometric information to restore the high-precision geometric information.
  • the inverse quantization process is similar to the high-precision geometric model reconstruction process, but without residual information as a supplement; the high-precision geometric model can then be reconstructed using the low-precision geometric information, the high-precision geometric information, and the decoded quantization parameters.
  • Reconstructing a high-precision geometric model refers to the process of using high-precision geometric information to reconstruct a high-precision geometric model based on a low-precision geometric model.
  • the high-precision geometric information and the low-precision geometric information are in one-to-one correspondence, so the high-precision three-dimensional coordinates of a vertex can be reconstructed from that vertex's high-precision and low-precision geometric information.
  • the calculation process of the high-precision three-dimensional coordinates (x_r, y_r, z_r) is as shown in Formula 25 to Formula 27:
  • the f_3 function is a reconstruction function.
  • the calculation process of the reconstruction function corresponds to the calculation process of the quantization function at the encoding end, and there are many ways to implement it. If the f_1 function adopts the implementation of Formula 7 to Formula 12, then the reconstruction function is implemented as shown in Formula 28 to Formula 30:
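Formulas 25 to 30 are not reproduced in this excerpt; as one plausible instance of the reconstruction function f_3 (an assumption, since its exact form depends on the f_1 quantization function used at the encoding end), each component's high-precision coordinate can be taken as the low-precision value scaled by that component's quantization step plus the high-precision remainder:

```python
# Hypothetical per-vertex reconstruction function: combines the
# low-precision coordinates with the dequantized high-precision
# remainders, component by component.
def f3(low, high, steps):
    return tuple(l * s + h for l, h, s in zip(low, high, steps))

x_r, y_r, z_r = f3(low=(14, 2, 0), high=(2, 1, 3), steps=(7, 7, 7))
```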
  • the decoder performs inverse quantization processing on the quantized first information according to the first quantization parameter to obtain the target three-dimensional grid, which further includes:
  • the decoding end determines the target three-dimensional grid using the information of the supplementary points and the coordinates of each vertex in the first precision geometric information.
  • the fourth precision geometric information in the supplementary point information is geometric information that has been quantized according to the second quantization parameter.
  • the supplementary point information also includes at least one of the following:
  • the third precision geometric information of the supplementary point is the quantized three-dimensional coordinate information of the supplementary point.
  • the video-based three-dimensional grid geometric information decoding framework of the embodiment of this application is shown in Figure 11.
  • the overall decoding process is:
  • the code stream is decomposed into a patch information sub-stream, a placeholder map sub-stream, and a geometric map sub-stream (it should be noted that the geometric map sub-stream can include either a code stream corresponding to a low-precision geometric map and a code stream corresponding to a high-precision geometric map, or a code stream corresponding to a low-precision geometric map filled with high-precision geometric information), and these are decoded respectively to obtain the patch information, the placeholder map, and the geometric map.
  • the geometric information of the low-precision mesh can be reconstructed using the placeholder map and the low-precision geometric map, and the geometric information of the high-precision mesh can be reconstructed using the placeholder map, the low-precision geometric map, and the high-precision geometric map; finally, the mesh is reconstructed using the reconstructed geometric information and the connectivity information obtained by other encoding and decoding methods.
  • this embodiment of the present application is the peer-end method embodiment corresponding to the above encoding method embodiment.
  • the decoding process is the inverse of the encoding process; all of the above implementations on the encoding side are applicable to this decoding-end embodiment and can achieve the same technical effect, which will not be described again here.
  • this embodiment of the present application also provides a decoding device 1200, which includes:
  • the third processing module 1201 is used to decode the obtained code stream and obtain quantization information and first information.
  • the first information includes at least one of the following: first precision geometric information, second precision geometric information, and information of supplementary points.
  • the information of the supplementary point includes the fourth precision geometric information of the supplementary point, and the fourth precision geometric information is the three-dimensional coordinate information lost during the quantization process of the supplementary point;
  • the quantization information includes first quantization information used to indicate the first quantization parameter and second quantization information used to indicate the second quantization parameter.
  • the second quantization parameter is a quantization parameter that quantizes the first part of the geometric information in the first information.
  • the first part of the geometric information includes at least one of the second precision geometric information and the fourth precision geometric information of the supplementary point;
  • the fourth processing module 1202 is used to perform inverse quantization processing on the first information according to the quantized information to obtain the target three-dimensional grid;
  • the first precision geometric information is the geometric information after quantization of the target three-dimensional grid
  • the second precision geometric information is the geometric information lost during the quantization process of the target three-dimensional grid
  • the information of the supplementary points is information about points generated during the quantization process that require additional processing
  • the third processing module 1201 includes:
  • the first acquisition sub-module is used to acquire the first offset value of the first quantization parameter relative to the first reference quantization parameter, where the first reference quantization parameter is the reference quantization parameter, set in the target picture group GOP, corresponding to the geometric information of the target three-dimensional grid, and the target GOP is the GOP in which the video frame corresponding to the target three-dimensional grid is located;
  • the second acquisition sub-module is used to acquire the first quantization parameter according to the first offset value and the first reference quantization parameter.
  • the third processing module 1201 is used to:
  • when the first quantization information adopted by the target three-dimensional grid corresponding to each video frame in the video frame sequence is the same, the first quantization information is obtained from the sequence parameter set of the target code stream, the target code stream being obtained by the encoding end encoding the first information of the target three-dimensional grid corresponding to each video frame;
  • when the target three-dimensional grids corresponding to at least two video frames in the video frame sequence adopt different first quantization information, the first quantization information corresponding to each video frame is obtained from the header of the code stream corresponding to that video frame, the code stream corresponding to each video frame being obtained by the encoding end encoding the first information of the target three-dimensional grid corresponding to that video frame.
  • the third processing module 1201 includes:
  • the third acquisition sub-module is used to acquire the second offset value of the second quantization parameter relative to the second reference quantization parameter, where the second reference quantization parameter is the reference quantization parameter, set in the target picture group GOP, corresponding to the first part of the geometric information, and the target GOP is the GOP in which the video frame corresponding to the target three-dimensional grid is located;
  • a fourth acquisition sub-module is used to acquire the second quantization parameter according to the second offset value and the second reference quantization parameter.
  • the third processing module 1201 is used to:
  • when the second quantization parameter adopted by the first part of the geometric information corresponding to each video frame in the video frame sequence is the same, the second quantization parameter is obtained from the sequence parameter set of the target code stream, the target code stream being obtained by the encoding end encoding the first information of the target three-dimensional grid corresponding to each video frame;
  • when the first part of the geometric information corresponding to at least two video frames in the video frame sequence uses different second quantization parameters, the second quantization parameter corresponding to each video frame is obtained from the header of the code stream corresponding to that video frame, the code stream corresponding to each video frame being obtained by the encoding end encoding the first information of the target three-dimensional grid corresponding to that video frame.
  • the fourth processing module 1202 includes:
  • a first processing submodule, configured to perform inverse quantization processing on the first part of the geometric information in the first information according to the second quantization information;
  • a second processing submodule, used to perform inverse quantization processing on the quantized first information according to the first quantization parameter to obtain the target three-dimensional grid.
  • the third processing module 1201 includes:
  • the fifth acquisition sub-module is used to acquire the target sub-code stream according to the acquired code stream.
  • the target sub-code stream includes: slice information sub-stream, placeholder map sub-stream and geometric map sub-stream;
  • the sixth acquisition sub-module is used to acquire second information according to the target sub-code stream, where the second information includes: at least one of a placeholder map and a geometric map;
  • the seventh acquisition sub-module is used to acquire the first information according to the second information.
  • the seventh acquisition sub-module includes:
  • the ninth acquisition unit is used to acquire two-dimensional image information based on the first-precision placeholder map and the first-precision geometric map;
  • the tenth acquisition unit is used to acquire two-dimensional slices according to the two-dimensional image information
  • An eleventh acquisition unit configured to perform three-dimensional back-projection of the two-dimensional slice according to the slice information corresponding to the slice information sub-stream, and obtain the three-dimensional slice;
  • the twelfth acquisition unit is used to acquire first precision geometric information according to the three-dimensional slice.
  • the seventh acquisition sub-module is configured to obtain the second-precision geometric information according to the second-precision geometric map.
  • the seventh acquisition sub-module includes:
  • a first determination unit configured to determine, according to the geometric map of the supplementary point, the first original slice corresponding to the third precision geometric information of the supplementary point and the second original slice corresponding to the fourth precision geometric information of the supplementary point;
  • a second determination unit is configured to determine supplementary point information based on the first original slice and the second original slice.
  • the third processing module 1201 is configured to determine the coordinates of each vertex in the first precision geometric information based on the first precision geometric information and the first quantization parameter of each component.
  • the third processing module 1201 is also configured to determine the target three-dimensional grid according to the coordinates of each vertex in the target three-dimensional grid and the second precision geometric information.
  • the third processing module 1201 is also configured to determine the target three-dimensional grid using the information of the supplementary points and the coordinates of each vertex in the first precision geometric information.
  • the supplementary point information also includes at least one of the following:
  • the third precision geometric information of the supplementary point, which is the quantized three-dimensional coordinate information of the supplementary point.
  • this device embodiment is a device corresponding to the above-mentioned method. All implementation methods in the above-mentioned method embodiment are applicable to this device embodiment and can achieve the same technical effect, which will not be described again here.
  • the embodiment of the present application also provides a decoding device, including a processor, a memory, and a program or instruction stored in the memory and executable on the processor.
  • when the program or instruction is executed by the processor, each process of the above decoding method embodiment is implemented, and the same technical effect can be achieved. To avoid repetition, it will not be described again here.
  • the embodiment of the present application also provides an encoding device, including a processor, a memory, and a program or instruction stored in the memory and executable on the processor.
  • when the program or instruction is executed by the processor, each process of the above encoding method embodiment is implemented, and the same technical effect can be achieved. To avoid repetition, it will not be described again here.
  • Embodiments of the present application also provide a readable storage medium.
  • Programs or instructions are stored on the computer-readable storage medium.
  • when the program or instructions are executed by a processor, each process of the above encoding method or decoding method embodiments is implemented, and the same technical effect can be achieved; to avoid repetition, they will not be repeated here.
  • the processor is the processor in the decoding device described in the above embodiment.
  • the readable storage medium includes computer-readable storage media, such as read-only memory (ROM), random access memory (RAM), magnetic disks, or optical disks.
  • An embodiment of the present application also provides an encoding device, including a processor and a communication interface, wherein the processor is configured to perform quantization processing on the geometric information of the target three-dimensional grid according to the first quantization parameter to obtain the first information.
  • the first information includes at least one of the following: first precision geometric information, second precision geometric information, and supplementary point information, where the supplementary point information includes fourth precision geometric information of the supplementary point, and the fourth precision geometric information is the three-dimensional coordinate information lost during the quantization process of the supplementary point; the encoding end performs quantization processing on the first part of the geometric information in the first information according to the second quantization parameter, the first part of the geometric information including at least one of the second precision geometric information and the fourth precision geometric information of the supplementary point; and the encoding end encodes the quantized first information and the quantization information, the quantization information including first quantization information indicating the first quantization parameter and second quantization information indicating the second quantization parameter;
  • the first precision geometric information is the geometric information after quantization of the target three-dimensional grid
  • the second precision geometric information is the geometric information lost during the quantization process of the target three-dimensional grid
  • the information of the supplementary points is information about points generated during the quantization process that require additional processing
  • This encoding device embodiment corresponds to the above-mentioned encoding method embodiment.
  • Each implementation process and implementation manner of the above-mentioned method embodiment can be applied to this encoding device embodiment, and can achieve the same technical effect.
  • Embodiments of the present application also provide a decoding device, including a processor and a communication interface, wherein the processor is used to decode the obtained code stream and obtain quantization information and first information.
  • the first information includes at least one of the following: first precision geometric information, second precision geometric information, and supplementary point information.
  • the supplementary point information includes fourth precision geometric information of the supplementary point.
  • the fourth precision geometric information is the three-dimensional coordinate information lost during the quantization process of the supplementary point;
  • the quantization information includes first quantization information used to indicate the first quantization parameter and second quantization information used to indicate the second quantization parameter, where the second quantization parameter is the quantization parameter used to quantize the first part of the geometric information in the first information;
  • the first precision geometric information is the geometric information after quantization of the target three-dimensional grid
  • the second precision geometric information is the geometric information lost during the quantization process of the target three-dimensional grid
  • the information of the supplementary points is information about points generated during the quantization process that require additional processing
  • This decoding device embodiment corresponds to the above-mentioned decoding method embodiment.
  • Each implementation process and implementation manner of the above-mentioned method embodiment can be applied to this decoding device embodiment, and can achieve the same technical effect.
  • the embodiment of the present application also provides a decoding device.
  • the decoding device in the embodiment of the present application also includes: instructions or programs stored in the memory and executable on the processor.
  • the processor calls the instructions or programs in the memory to execute the methods performed by the modules shown in Figure 12, and can achieve the same technical effect; to avoid repetition, this will not be repeated here.
  • this embodiment of the present application also provides a communication device 1300, which includes a processor 1301 and a memory 1302.
  • the memory 1302 stores programs or instructions that can be run on the processor 1301. For example, when the communication device 1300 is an encoding device, and the program or instruction is executed by the processor 1301, each step of the above encoding method embodiment is implemented, and the same technical effect can be achieved.
  • when the communication device 1300 is a decoding device, and the program or instruction is executed by the processor 1301, each step of the above decoding method embodiment is implemented, and the same technical effect can be achieved. To avoid duplication, the details are not repeated here.
  • An embodiment of the present application further provides a chip.
  • the chip includes a processor and a communication interface.
  • the communication interface is coupled to the processor.
  • the processor is used to run programs or instructions to implement the above encoding method or decoding method.
  • Each process of the above method embodiments can achieve the same technical effect. To avoid repetition, it is not repeated here.
  • the chips mentioned in the embodiments of this application may also be called a system-level chip, a system chip, a chip system, or a system-on-a-chip, etc.
  • Embodiments of the present application further provide a computer program/program product.
  • the computer program/program product is stored in a storage medium.
  • the computer program/program product is executed by at least one processor to implement the above encoding method or decoding method.
  • Each process of the embodiment can achieve the same technical effect, so to avoid repetition, it will not be described again here.
  • Embodiments of the present application also provide a communication system, which at least includes: an encoding device and a decoding device.
  • the encoding device can be used to perform the steps of the encoding method as described above.
  • the decoding device can be used to perform the steps of the decoding method as described above, and can achieve the same technical effect. To avoid repetition, the details are not repeated here.
  • the methods of the above embodiments can be implemented by means of software plus the necessary general hardware platform; of course, they can also be implemented by hardware, but in many cases the former is the better implementation.
  • the part of the technical solution of the present application that is essential, or that contributes to the existing technology, can be embodied in the form of a computer software product.
  • the computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to cause a terminal (which can be a mobile phone, computer, server, air conditioner, or network device, etc.) to execute the methods described in the various embodiments of this application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

本申请公开了一种编码、解码方法、装置及设备,该编码方法包括:编码端根据第一量化参数,对目标三维网格的几何信息进行量化处理,得到第一信息,第一信息包括以下至少一项:第一精度几何信息、第二精度几何信息、补充点的信息;补充点的信息包括所述补充点的第四精度几何信息,第四精度几何信息为补充点在被量化过程中丢失的三维坐标信息;编码端根据第二量化参数对所述第一信息中的第一部分几何信息进行量化处理,所述第一部分几何信息包括第二精度几何信息和补充点的第四精度几何信息中的至少一项;编码端对量化处理后的第一信息和量化信息进行编码,量化信息包括用于指示第一量化参数的第一量化信息和用于指示第二量化参数的第二量化信息。

Description

编码、解码方法、装置及设备
相关申请的交叉引用
本申请主张在2022年3月25日在中国提交的中国专利申请No.202210307829.0的优先权,其全部内容通过引用包含于此。
技术领域
本申请属于编解码技术领域,具体涉及一种编码、解码方法、装置及设备。
背景技术
三维网格(Mesh)可以被认为是过去多年来最流行的三维模型的表示方法,其在许多应用程序中扮演着重要的角色。它的表示简便,因此被大量以硬件算法集成到电脑、平板电脑和智能手机的图形处理单元(Graphics Processing Unit,GPU)中,专门用于渲染三维网格。
由于Mesh的顶点与点云都是空间中一组无规则分布的离散点集,具有相似的特点。因此,三维网格几何信息可以用点云压缩算法进行压缩。但相比于点云,三维网格的顶点具有空间分布更加稀疏,更加不均匀的特点。使用点云压缩算法来压缩三维网格模型的几何信息,压缩效率并不高。
发明内容
本申请实施例提供一种编码、解码方法、装置及设备,能够解决现有技术的对于三维网格几何信息的压缩方式,存在压缩效率不高的问题。
第一方面,提供了一种编码方法,包括:
编码端根据第一量化参数,对所述目标三维网格的几何信息进行量化处理,得到第一信息,所述第一信息包括以下至少一项:第一精度几何信息、第二精度几何信息、补充点的信息;所述补充点的信息包括所述补充点的第四精度几何信息,所述第四精度几何信息为补充点在被量化过程中丢失的三维坐标信息;
所述编码端根据第二量化参数对所述第一信息中的第一部分几何信息进行量化处理,所述第一部分几何信息包括第二精度几何信息和补充点的第四精度几何信息中的至少一项;
所述编码端对量化处理后的第一信息和量化信息进行编码,所述量化信息包括用于指示所述第一量化参数的第一量化信息和用于指示第二量化参数的第二量化信息;
其中,所述第一精度几何信息为所述目标三维网格量化后的几何信息,所述第二精度几何信息为所述目标三维网格量化过程中丢失的几何信息,所述补充点的信息为量化过程中产生的需要额外处理的点的信息。
第二方面,提供了一种解码方法,包括:
解码端对获取的码流进行解码处理,获取量化信息和第一信息,所述第一信息包括以下至少一项:第一精度几何信息、第二精度几何信息、补充点的信息,所述补充点的信息包括所述补充点的第四精度几何信息,所述第四精度几何信息为补充点在被量化过程中丢失的三维坐标信息;所述量化信息包括用于指示所述第一量化参数的第一量化信息和用于指示第二量化参数的第二量化信息,所述第二量化参数为对所述第一信息中的第一部分几何信息进行量化处理的量化参数,所述第一部分几何信息包括第二精度几何信息和补充点的第四精度几何信息中的至少一项;
所述解码端根据所述量化信息,对所述第一信息进行反量化处理,获取目标三维网格;
其中,所述第一精度几何信息为所述目标三维网格量化后的几何信息,所述第二精度几何信息为所述目标三维网格量化过程中丢失的几何信息,所述补充点的信息为量化过程中产生的需要额外处理的点的信息。
第三方面,提供了一种编码装置,包括:
第一处理模块,用于根据第一量化参数,对所述目标三维网格的几何信息进行量化处理,得到第一信息,所述第一信息包括以下至少一项:第一精度几何信息、第二精度几何信息、补充点的信息;所述补充点的信息包括所述补充点的第四精度几何信息,所述第四精度几何信息为补充点在被量化过程中丢失的三维坐标信息;
第二处理模块,用于根据第二量化参数对所述第一信息中的第一部分几何信息进行量化处理,所述第一部分几何信息包括第二精度几何信息和补充点的第四精度几何信息中的至少一项;
第一编码模块,用于对量化处理后的第一信息和量化信息进行编码,所述量化信息包括用于指示所述第一量化参数的第一量化信息和用于指示第二量化参数的第二量化信息;
其中,所述第一精度几何信息为所述目标三维网格量化后的几何信息,所述第二精度几何信息为所述目标三维网格量化过程中丢失的几何信息,所述补充点的信息为量化过程中产生的需要额外处理的点的信息。
第三方面,提供了一种解码装置,包括:
第三处理模块,用于对获取的码流进行解码处理,获取量化信息和第一信息,所述第一信息包括以下至少一项:第一精度几何信息、第二精度几何信息、补充点的信息,所述补充点的信息包括所述补充点的第四精度几何信息,所述第四精度几何信息为补充点在被量化过程中丢失的三维坐标信息;所述量化信息包括用于指示所述第一量化参数的第一量化信息和用于指示第二量化参数的第二量化信息,所述第二量化参数为对所述第一信息中的第一部分几何信息进行量化处理的量化参数,所述第一部分几何信息包括第二精度几何信息和补充点的第四精度几何信息中的至少一项;
第四处理模块,用于根据所述量化信息,对所述第一信息进行反量化处理,获取目标三维网格;
其中,所述第一精度几何信息为所述目标三维网格量化后的几何信息,所述第二精度几何信息为所述目标三维网格量化过程中丢失的几何信息,所述补充点的信息为量化过程中产生的需要额外处理的点的信息。
第五方面,提供了一种编码设备,包括处理器和存储器,所述存储器存储可在所述处理器上运行的程序或指令,所述程序或指令被所述处理器执行时实现如第一方面所述的方法的步骤。
第六方面,提供了一种编码设备,包括处理器及通信接口,其中,所述处理器用于根据第一量化参数,对所述目标三维网格的几何信息进行量化处理,得到第一信息,所述第一信息包括以下至少一项:第一精度几何信息、第二精度几何信息、补充点的信息;所述补充点的信息包括所述补充点的第四精度几何信息,所述第四精度几何信息为补充点在被量化过程中丢失的三维坐标信息;根据第二量化参数对所述第一信息中的第一部分几何信息进行量化处理,所述第一部分几何信息包括第二精度几何信息和补充点的第四精度几何信息中的至少一项;对量化处理后的第一信息和量化信息进行编码,所述量化信息包括用于指示所述第一量化参数的第一量化信息和用于指示第二量化参数的第二量化信息;
其中,所述第一精度几何信息为所述目标三维网格量化后的几何信息,所述第二精度几何信息为所述目标三维网格量化过程中丢失的几何信息,所述补充点的信息为量化过程中产生的需要额外处理的点的信息。
第七方面,提供了一种解码设备,包括处理器和存储器,所述存储器存储可在所述处理器上运行的程序或指令,所述程序或指令被所述处理器执行时实现如第二方面所述的方法的步骤。
第八方面,提供了一种解码设备,包括处理器及通信接口,其中,所述处理器用于对获取的码流进行解码处理,获取量化信息和第一信息,所述第一信息包括以下至少一项:第一精度几何信息、第二精度几何信息、补充点的信息,所述补充点的信息包括所述补充点的第四精度几何信息,所述第四精度几何信息为补充点在被量化过程中丢失的三维坐标信息;所述量化信息包括用于指示所述第一量化参数的第一量化信息和用于指示第二量化参数的第二量化信息,所述第二量化参数为对所述第一信息中的第一部分几何信息进行量化处理的量化参数,所述第一部分几何信息包括第二精度几何信息和补充点的第四精度几何信息中的至少一项;根据所述量化信息,对所述第一信息进行反量化处理,获取目标三维网格;
其中,所述第一精度几何信息为所述目标三维网格量化后的几何信息,所述第二精度几何信息为所述目标三维网格量化过程中丢失的几何信息,所述补充点的信息为量化过程中产生的需要额外处理的点的信息。
第九方面,提供了一种通信系统,包括:编码设备和解码设备,所述编码设备可用于 执行如第一方面所述的方法的步骤,所述解码设备可用于执行如第二方面所述的方法的步骤。
第十方面,提供了一种可读存储介质,所述可读存储介质上存储程序或指令,所述程序或指令被处理器执行时实现如第一方面所述的方法的步骤,或者实现如第二方面所述的方法的步骤。
第十一方面,提供了一种芯片,所述芯片包括处理器和通信接口,所述通信接口和所述处理器耦合,所述处理器用于运行程序或指令,实现如第一方面所述的方法,或实现如第二方面所述的方法。
第十二方面,提供了一种计算机程序/程序产品,所述计算机程序/程序产品被存储在存储介质中,所述计算机程序/程序产品被至少一个处理器执行以实现如第一方面所述的方法的步骤,或实现如第二方面所述的方法的步骤。
在本申请实施例中,通过第一量化参数对三维网格的几何信息进行量化,使得量化后三维网格的顶点的间距减小,进而减小投影后二维顶点的间距,以此可以提高三维网格的几何信息的压缩效率,且通过第二量化参数对第一信息中的第一部分几何信息(即高精度几何信息)进行量化处理,能够有效控制高精度几何信息的比特数,进而能够有效控制编码质量。
附图说明
图1是本申请实施例的编码方法的流程示意图;
图2是基于网格的精细划分过程示意图;
图3是Patch排列的八种方向示意图;
图4是高精度几何信息的编码过程示意图;
图5是raw patch示意图;
图6是基于视频的三维网格几何信息编码框架示意图;
图7是本申请实施例的编码装置的模块示意图;
图8是本申请实施例的编码设备的结构示意图;
图9是本申请实施例的解码方法的流程示意图;
图10是几何信息重建框图;
图11是基于视频的三维网格几何信息解码框架示意图;
图12是本申请实施例的解码装置的模块示意图;
图13是本申请实施例的通信设备的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员所获得的所有其他实施例,都属于本申请保护的范围。
本申请的说明书和权利要求书中的术语“第一”、“第二”等是用于区别类似的对象,而不用于描述特定的顺序或先后次序。应该理解这样使用的术语在适当情况下可以互换,以便本申请的实施例能够以除了在这里图示或描述的那些以外的顺序实施,且“第一”、“第二”所区别的对象通常为一类,并不限定对象的个数,例如第一对象可以是一个,也可以是多个。此外,说明书以及权利要求中“和/或”表示所连接对象的至少其中之一,字符“/”一般表示前后关联对象是一种“或”的关系。
值得指出的是,本申请实施例所描述的技术不限于长期演进型(Long Term Evolution,LTE)/LTE的演进(LTE-Advanced,LTE-A)系统,还可用于其他无线通信系统,诸如码分多址(Code Division Multiple Access,CDMA)、时分多址(Time Division Multiple Access,TDMA)、频分多址(Frequency Division Multiple Access,FDMA)、正交频分多址(Orthogonal Frequency Division Multiple Access,OFDMA)、单载波频分多址(Single-carrier Frequency Division Multiple Access,SC-FDMA)和其他系统。本申请实施例中的术语“系统”和“网络”常被可互换地使用,所描述的技术既可用于以上提及的系统和无线电技术,也可用于其他系统和无线电技术。以下描述出于示例目的描述了新空口(New Radio,NR)系统,并且在以下大部分描述中使用NR术语,但是这些技术也可应用于NR系统应用以外的应用,如第6代(6th Generation,6G)通信系统。
下面结合附图,通过一些实施例及其应用场景对本申请实施例提供的编码方法进行详细地说明。
如图1所示,本申请实施例提供了一种编码方法,包括:
步骤101:编码端根据第一量化参数,对所述目标三维网格的几何信息进行量化处理,得到第一信息。
本申请实施例中,目标三维网格可以理解为任意视频帧对应的三维网格,该目标三维网格的几何信息可以理解为是三维网格中顶点的坐标,该坐标通常指的是三维坐标。
所述第一信息包括以下至少一项:
A11、第一精度几何信息;
需要说明的是,该第一精度几何信息可以理解为低精度几何信息,即低精度几何信息指的是目标三维网格量化后的几何信息,即量化后的目标三维网格包括的各顶点三维坐标信息。
A12、第二精度几何信息;
需要说明的是,该第二精度几何信息可以理解为高精度几何信息,高精度几何信息可以看作是量化过程中丢失的几何信息,即丢失的三维坐标信息。
A13、补充点的信息。
需要说明的是,补充点的信息是指量化过程中产生的需要额外处理的点的信息,也就 是说,所述补充点为量化过程中产生的需要额外处理的点,例如,坐标位置出现重叠的重复点等,通过对重复点进行处理,可以使得在量化中坐标位置重叠的顶点在反量化后恢复到原来的位置。
可选地,该补充点的信息,包括以下至少一项:
A131、补充点对应的第一精度几何信息中顶点的索引;
需要说明的是,通过标识索引,便可知道量化后的网格中,哪些点标识的是量化前的三维网格中的多个点,即量化前的三维网格中的多个点在量化后重合到了一起,通过顶点的索引便可确定补充点的低精度几何信息。
A132、补充点的第三精度几何信息;
需要说明的是,该第三精度几何信息可以理解为补充点的低精度几何信息,即补充点被量化后的三维坐标信息。
A133、补充点的第四精度几何信息;
需要说明的是,该第四精度几何信息可以理解为补充点的高精度几何信息,即补充点在被量化过程中丢失的三维坐标信息。
这里需要说明的是,在具体使用时,通过A131和A133或者通过A132和A133便可确定得到量化后隐藏的点有哪些。
步骤102:所述编码端根据第二量化参数对所述第一信息中的第一部分几何信息进行量化处理,所述第一部分几何信息包括第二精度几何信息和补充点的第四精度几何信息中的至少一项,所述第四精度几何信息为补充点在被量化过程中丢失的三维坐标信息。
具体的,在有损模式下,可以对高精度信息和/或补充点的高精度信息进一步进行量化,从而达到在可接受的失真程度下有效提高压缩效率。
步骤103:所述编码端对量化处理后的第一信息和量化信息进行编码,所述量化信息包括用于指示所述第一量化参数的第一量化信息和用于指示第二量化参数的第二量化信息;
这里的第一量化信息可以是第一量化参数,也可以是第一量化参数相对于第一基准量化参数的第一偏移值。第二量化信息可以是第二量化参数,也可以是第二量化参数相对于第二基准量化参数的第二偏移值。
这里,对量化处理后的第一信息进行编码是指对第一信息中的第二部分几何信息和量化处理后的第一部分几何信息进行编码,该第二部分几何信息为所述第一信息中除所述第一部分几何信息之外的几何信息。本申请实施例中,通过第一量化参数对三维网格的几何信息进行量化,使得量化后三维网格的顶点的间距减小,进而减小投影后二维顶点的间距,以此可以提高三维网格的几何信息的压缩效率,且通过第二量化参数对第一信息中的第一部分几何信息(即高精度几何信息)进行量化处理,能够有效控制高精度几何信息的比特数,进而能够有效控制编码质量。
可选地,所述编码端对所述第一量化信息进行编码,包括:
所述编码端对所述第一量化参数进行编码;
或者,所述编码端获取所述第一量化参数相对于第一基准量化参数的第一偏移值,所述第一基准量化参数为目标画面组(Group Of Pictures,GOP)中设置的与所述目标三维网格的几何信息对应的基准量化参数,所述目标GOP为所述目标三维网格对应的视频帧所在的GOP;所述编码端对所述第一偏移值进行编码。
具体的,每个GOP设置一个第一基准量化参数,该GOP中的每个视频帧根据时域预测结构在第一基准量化参数上进行偏移,得到第一偏移值。可选地,所述第一基准量化参数包括第一基准量化系数。
本申请实施例中,第一量化信息的编码可以采用多种方式,可以采用直接熵编码的方式对第一量化信息进行编码;也可以采用其他方法对第一量化信息进行编码。如果采用熵编码的话,可以采用零阶指数哥伦布编码;或者基于上下文的内容自适应熵编码。
可选地,本申请实施例的方法,还包括:
在视频帧序列中的每个视频帧对应的目标三维网格采用的所述第一量化信息相同的情况下,在目标码流的序列参数集中写入编码后的第一量化信息,所述目标码流是根据每个所述视频帧对应的目标三维网格的第一信息得到的;
或者,在视频帧序列中的至少两个视频帧对应的目标三维网格采用的所述第一量化信息不同的情况下,在每个视频帧对应的码流的头部写入与所述视频帧对应的编码后的第一量化参数,每个视频帧对应的码流是根据所述视频帧对应的目标三维网格的第一信息得到的。
例如,编码后的第一量化参数可以写在码流的多个位置,如果整个视频帧序列都采用同样的第一量化参数,则将编码后的第一量化参数写在上述目标码流的序列参数集中,如果每个视频帧采用不同的第一量化参数,则在每个视频帧对应的码流的头部写入编码后的第一量化参数。
可选地,本申请实施例的方法,所述编码端对所述第二量化信息进行编码,包括:
所述编码端对所述第二量化参数进行编码;
或者,所述编码端获取所述第二量化参数相对于第二基准量化参数的第二偏移值,所述第二基准量化参数为目标画面组GOP中设置的与所述第一部分几何信息对应的基准量化参数,所述目标GOP为所述目标三维网格对应的视频帧所在的GOP;所述编码端对所述第二偏移值进行编码。
具体的,每个GOP设置一个第二基准量化参数,该GOP中的每个视频帧根据时域预测结构在第二基准量化参数上进行偏移,得到第二偏移值。可选地,所述第二基准量化参数包括第二基准量化系数。
本申请实施例中,第二量化信息的编码可以采用多种方式,可以采用直接熵编码的方式对第二量化信息进行编码;也可以采用其他方法对第二量化信息进行编码。如果采用熵编码的话,可以采用零阶指数哥伦布编码;或者基于上下文的内容自适应熵编码。
需要说明的是,对高精度几何信息量化的过程与几何信息量化过程类似,但是不需要保留量化产生的残差信息。
可选地,本申请实施例的方法,还包括:
在视频帧序列中的每个视频帧对应的第一部分几何信息采用的所述第二量化信息相同的情况下,在目标码流的序列参数集中写入编码后的第二量化信息,所述目标码流是根据每个所述视频帧对应的目标三维网格的第一信息得到的;
或者,在视频帧序列中的至少两个视频帧对应的第一部分几何信息采用的所述第二量化信息不同的情况下,在每个视频帧对应的码流的头部写入与所述视频帧对应的编码后的第二量化信息,每个视频帧对应的码流是根据所述视频帧对应的目标三维网格的第一信息得到的。
例如,编码后的第二量化参数可以写在码流的多个位置,如果整个视频帧序列都采用同样的第二量化参数,则将编码后的第二量化参数写在上述目标码流的序列参数集中,如果每个视频帧采用不同的第二量化参数,则在每个视频帧对应的码流的头部写入编码后的第二量化参数。
需要说明的是,上述编码后的第一量化参数和第二量化参数可以位于码流中的不同位置,例如,编码后的第一量化参数位于目标码流的序列参数集中,编码后的第二量化参数位于每个视频帧对应的码流的头部,或者,编码后的第二量化参数位于目标码流的序列参数集中,编码后的第一量化参数位于每个视频帧对应的码流的头部,当然,编码后的第一量化参数和第二量化参数可以位于码流中的相同位置,如均位于目标码流的序列参数集,或者位于同一个视频帧对应的码流的头部。
由于上述第一量化信息和第二量化信息是解码端进行解码时所需要的信息,因此,通过对上述第一量化信息和第二量化信息进行编码,能够便于解码端基于该第一量化信息和第二量化信息快速地进行解码。
可选地,上述步骤101的具体实现方式包括:
所述编码端根据每一分量的第一量化参数,对所述目标三维网格中的每一顶点进行量化,获取第一精度几何信息。
需要说明的是,每一分量的第一量化参数可以根据使用需求灵活设置;第一量化参数主要包括X向、Y向和Z向三个分量上的量化参数。
通常情况下,对于精度要求不高的量化,在量化后可以只保留低精度几何信息;而对于精度要求较高的量化,在量化时不仅要记录低精度几何信息,也需要记录高精度几何信息,以此在解码时能够实现精准的网格恢复,也就是说,上述的步骤101的具体实现方式还应当包括:
所述编码端根据所述第一精度几何信息以及所述每一分量的第一量化参数,获取第二精度几何信息。
例如,假设某顶点的三维坐标为(x,y,z),第一量化参数为(QPx,QPy,QPz),低精度几何信息(xl,yl,zl)和高精度几何信息(xh,yh,zh)的计算过程如公式一至公式六所示:
公式一:xl=f1(x,QPx);
公式二:yl=f1(y,QPy);
公式三:zl=f1(z,QPz);
公式四:xh=f2(x,xl,QPx);
公式五:yh=f2(y,yl,QPy);
公式六:zh=f2(z,zl,QPz);
其中,公式一至公式三中的f1函数是量化函数,量化函数的输入为某一维度的坐标和该维度的第一量化参数,输出为量化后的坐标值;公式四至公式六中的f2函数输入为原始坐标值、量化后的坐标值以及该维度的第一量化参数(该第一量化参数也可描述为第一量化系数),输出为高精度的坐标值。
f1函数可以有多种计算方式,比较通用的一种计算方式如公式七至公式九所示,使用每个维度的原始坐标除以该维度的量化参数来计算。其中,/为除法运算符,对除法运算的结果可以采用不同的方式进行舍入,如四舍五入、向下取整、向上取整等。f2函数也存在多种计算方式,与公式七至公式九相对应的实现方式如公式十至公式十二所示,其中,*为乘法运算符。
公式七:xl=x/QPx
公式八:yl=y/QPy
公式九:zl=z/QPz
公式十:xh=x-xl*QPx
公式十一:yh=y-yl*QPy
公式十二:zh=z-zl*QPz
当量化参数为2的整数次幂时,f1函数和f2函数可以使用位运算实现,如公式十三至公式十八:
公式十三:xl=x>>log2QPx
公式十四:yl=y>>log2QPy
公式十五:zl=z>>log2QPz
公式十六:xh=x&(QPx-1);
公式十七:yh=y&(QPy-1);
公式十八:zh=z&(QPz-1);
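上述f1函数与f2函数的两种实现(通用的向下取整除法实现,以及量化参数为2的整数次幂时的位运算实现),可用如下Python草图示意;其中函数名f1_f2、quantize_vertex为本文为说明所设,并非专利给出的实现:

```python
def f1_f2(coord, qp):
    """将一维坐标拆分为低精度部分(f1)与高精度残差(f2)。
    qp为2的整数次幂时用位运算(公式十三至十八),
    否则用向下取整除法与乘减(公式七至十二)。"""
    if qp & (qp - 1) == 0:            # qp是2的整数次幂
        shift = qp.bit_length() - 1
        low = coord >> shift          # 公式十三至十五
        high = coord & (qp - 1)       # 公式十六至十八
    else:
        low = coord // qp             # 公式七至九(向下取整)
        high = coord - low * qp       # 公式十至十二
    return low, high

def quantize_vertex(xyz, qps):
    """对顶点(x, y, z)按各分量的第一量化参数分别拆分。"""
    pairs = [f1_f2(c, qp) for c, qp in zip(xyz, qps)]
    return tuple(p[0] for p in pairs), tuple(p[1] for p in pairs)
```

例如,quantize_vertex((1234, 567, 89), (8, 8, 4))得到低精度坐标(154, 70, 22)与高精度残差(2, 7, 1)。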
值得注意的是,无论f1函数和f2函数采用哪种计算方式,第一量化参数QPx、QPy和QPz都可以灵活设置。首先,不同分量的第一量化参数并不一定相等,可以利用不同分量的第一量化参数的相关性,建立QPx、QPy和QPz之间的关系,为不同分量设置不同的第一量化参数;其次,不同空间区域的第一量化参数也不一定相等,可以根据局部区域顶点分布的稀疏程度自适应的设置量化参数。
需要说明的是,高精度几何信息包含的是三维网格的轮廓的细节信息。为了进一步提高压缩效率,可以对高精度几何信息(xh,yh,zh)进一步处理。在三维网格模型中,不同区域的顶点高精度几何信息的重要程度是不同的。对于顶点分布稀疏的区域,高精度几何信息的失真并不会对三维网格的视觉效果产生较大影响。这时为了提高压缩效率,可以选择对高精度几何信息进一步量化,或者只保留部分点的高精度几何信息。
可选地,在进行量化的过程中,可能会存在多个点量化完重合到同一个位置,也就是说,此种情况下,上述的步骤101的具体实现方式还应当包括:
所述编码端根据所述目标三维网格的几何信息和所述第一精度几何信息,确定补充点的信息。
也就是说,在得到所有顶点的低精度几何信息后,将低精度几何信息重复的点作为补充点,单独进行编码。补充点的几何信息同样可以分为低精度几何信息和高精度几何信息两部分,根据应用对压缩失真的要求,可以选择保留所有补充点或者只保留其中一部分补充点。对补充点的高精度几何信息,也可以进行进一步量化,或者只保留部分点的高精度几何信息。
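补充点的识别,即找出量化后低精度坐标重合的顶点,可用如下Python草图示意;其中以字典记录已出现低精度坐标的数据组织方式为本文假定,仅作说明:

```python
def find_supplementary_points(vertices, qps):
    """对每个顶点做量化拆分,低精度坐标与先前顶点重合者记为补充点。
    kept保存每个量化格子中首个顶点的(低精度, 高精度)信息;
    extra保存补充点:(对应kept中顶点的索引, 补充点的高精度信息)。"""
    seen = {}    # 低精度坐标 -> kept中的索引
    kept = []
    extra = []
    for v in vertices:
        low = tuple(c // qp for c, qp in zip(v, qps))
        high = tuple(c - l * qp for c, l, qp in zip(v, low, qps))
        if low in seen:
            extra.append((seen[low], high))
        else:
            seen[low] = len(kept)
            kept.append((low, high))
    return kept, extra
```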
需要说明的是,在对第一信息中第一部分几何信息进行进一步量化处理之后,需要对最终的第一信息(以下描述为第一信息)进行编码得到最终的码流,可选地,本申请实施例中对第一信息进行编码的具体实现过程包括:
步骤1021,所述编码端对所述第一信息进行处理,获取第二信息,所述第二信息包括占位图和几何图中的至少一项;
步骤1022,所述编码端对所述第二信息进行编码。
需要说明的是,因第一信息中包含的信息的种类不同,在对第一信息进行处理时,会分别对不同类的信息进行单独处理,下面分别从不同信息的角度,对步骤1021的实现过程说明如下。
一、所述第一信息包括第一精度几何信息
可选地,此种情况下,步骤1021的具体实现过程,包括:
步骤10211,所述编码端对所述第一精度几何信息进行三维片划分;
需要说明的是,此种情况下,主要是将低精度几何信息进行片(Patch)划分,得到多个三维片;此步骤的具体实现方式为:编码端确定第一精度几何信息中包含的每个顶点的投影平面;编码端根据所述投影平面对所述第一精度几何信息中所包含的顶点进行片划分;编码端对所述第一精度几何信息中所包含的顶点进行聚类,得到划分后的每一片。也就是说,对于Patch划分的过程主要包括:首先估计每个顶点的法向量,选择平面法向量与顶点法向量之间的夹角最小的候选投影平面作为该顶点的投影平面;然后,根据投影平面对顶点进行初始划分,将投影平面相同且连通的顶点组成patch;最后,使用精细划分算法优化聚类结果,得到最终的三维片(3D patch),将3D patch投影到二维平面得到2D patch。
下面对由第一精度几何信息得到三维片的过程的具体实现进行详细说明如下。
首先估计每个点的法向量。切线平面和它对应的法线是根据每个点的最近的邻居顶点m在一个预定义的搜索距离定义的。K-D树用于分离数据,并在点pi附近找到相邻点,该集合的重心用于定义法线。重心c的计算方法如下:
公式十九:c = (1/m)·∑ᵢ₌₁ᵐ pᵢ;
使用特征分解法估计顶点法向量,计算过程公式二十所示:
公式二十:∑ᵢ₌₁ᵐ (pᵢ−c)(pᵢ−c)ᵀ = V·Λ·Vᵀ,顶点法向量取该协方差矩阵最小特征值对应的特征向量;
在初始划分阶段,初步选择每个顶点的投影平面。设顶点法向量的估计值为n̂ᵢ,候选投影平面的法向量为ôₚ,选择法向量方向与顶点法向量方向最接近的平面作为该顶点的投影平面,平面选择的计算过程如公式二十一所示:
公式二十一:pᵢ = argmaxₚ(n̂ᵢ·ôₚ);
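投影平面选择的计算可用如下Python草图示意;其中六个轴对齐候选投影平面为本文假定的常见设置,专利原文并未列出具体的候选平面集合:

```python
# 六个轴对齐的候选投影平面法向量(假定设置)
ORIENTATIONS = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
                (-1, 0, 0), (0, -1, 0), (0, 0, -1)]

def choose_projection_plane(normal):
    """选取与顶点法向量点积最大(方向最接近)的候选平面索引。"""
    return max(range(len(ORIENTATIONS)),
               key=lambda p: sum(n * o for n, o in zip(normal, ORIENTATIONS[p])))
```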
精细划分过程可以采用基于网格的算法来降低算法的时间复杂度,基于网格的精细划分算法流程如图2所示,具体包括:
先设置循环次数(numlter)为0,判断循环次数是否小于最大循环次数(需要说明的是,该最大循环次数可以根据使用需求设置),若小于则执行下述过程:
步骤201,将(x,y,z)几何坐标空间划分为体素。
需要说明的是,此处的几何坐标空间指的是由量化得到的第一精度几何信息所构成的几何坐标空间。例如,对于使用体素大小为8的10位Mesh,每个坐标上的体素数量将是1024/8=128,此坐标空间中的体素总数将是128×128×128。
步骤202,查找填充体素,填充体素是指网格中包含至少有一个点的体素。
步骤203,计算每个填充体素在每个投影平面上的平滑分数,记为voxScoreSmooth,体素在某投影平面的体素平滑分数是通过初始分割过程聚集到该投影平面的点的数量。
步骤204,使用KD-Tree分区查找近邻填充体素,记为nnFilledVoxels,即每个填充体素(在搜索半径内和/或限制到最大数量的相邻体素)的最近的填充体素。
步骤205,使用近邻填充体素在每个投影平面的体素平滑分数,计算每个填充体素的平滑分数(scoreSmooth),计算过程如公式二十二所示:
公式二十二:scoreSmooth[v][p] = ∑_{v′∈nnFilledVoxels[v]} voxScoreSmooth[v′][p];
其中,p是投影平面的索引,v是近邻填充体素的索引。一个体素中所有点的scoreSmooth是相同的。
步骤206,使用顶点的法向量与候选投影平面的法向量计算法向分数,记为scoreNormal,计算过程如公式二十三所示:
公式二十三:scoreNormal[i][p]=normal[i]·orientation[p];
其中,p是投影平面的索引,i是顶点的索引。
步骤207,使用scoreSmooth和scoreNormal计算每个体素在各个投影平面上的最终分数,计算过程如公式二十四所示:
公式二十四:score[i][p] = scoreNormal[i][p] + λ·scoreSmooth[v][p],其中λ为平滑分数的权重系数;
其中,i为顶点索引,p为投影平面的索引,v是顶点i所在的体素索引。
步骤208,使用步骤207中的分数对顶点进行聚类,得到精细划分的patch。
多次迭代上述过程,直到得到较为准确的patch。
步骤10212,所述编码端将划分的三维片进行二维投影,获取二维片;
需要说明的是,此过程是将3D patch投影到二维平面得到二维片(2D patch)。
步骤10213,所述编码端将所述二维片进行打包,获取二维图像信息;
需要说明的是,此步骤实现的是片打包(Patch packing),Patch packing的目的是将2D patch排列在一张二维图像上,Patch packing的基本原则是将patch不重叠的排列在二维图像上或者将patch的无像素部分进行部分重叠的排列在二维图像上,通过优先级排列、时域一致排列等算法,使patch排列的更加紧密,且具有时域一致性,提高编码性能。
假设,二维图像的分辨率为W×H,定义patch排列的最小块大小为T,它指定了放置在这个2D网格上的不同补丁之间的最小距离。
首先,patch按照不重叠的原则插入放置在2D网格上。每个patch占用由整数个TxT块组成的区域。此外,相邻patch之间要求至少有一个TxT块的距离。当没有足够的空间放置下一个patch时,图像的高度将变成原来的2倍,然后继续放置patch。
为了使patch排列的更加紧密,patch可以选择多种不同的排列方向。例如,可以采用八种不同的排列方向,如图3所示,包括0度、180度、90度、270度以及前四种方向的镜像。
为了获得更好的适应视频编码器帧间预测的特性,采用一种具有时域一致性的Patch排列方法。在一个画面组(Group of Pictures,GOP)中,第一帧的所有patch按照从大到小的顺序依次排列。对于GOP中的其他帧,使用时域一致性算法调整patch的排列顺序。
这里还需要说明的是,在得到二维图像信息后便能根据获取二维图像信息过程中的信息得到patch信息,之后便可以进行片信息的编码,获取片信息子码流;
这里需要说明的是,在进行二维图像信息过程中需要记录patch划分的信息、patch投影平面的信息以及patch packing位置的信息,所以patch信息记录的是获取二维图像过程中各步骤操作的信息,即patch信息包括:patch划分的信息、patch投影平面的信息以及patch packing位置的信息。
步骤10214,所述编码端根据所述二维图像信息,获取第一精度的占位图和第一精度的几何图。
需要说明的是,对于获取占位图的过程,主要为:利用patch packing得到的patch排列信息,将二维图像中存在顶点的位置设为1,其余位置设为0,得到占位图。对于获取几何图的过程,主要为:在通过投影得到2D patch的过程中,保存了每个顶点到投影平面的距离,这个距离称为深度,低精度几何图压缩部分就是将2D patch中每个顶点的深度值,排列到该顶点在占位图中的位置上,得到低精度几何图。
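占位图与低精度几何图的生成可用如下Python草图示意;该草图省略了patch排列与T×T块对齐等细节,函数名为本文所设:

```python
def build_maps(patch_points, width, height):
    """将2D patch中每个投影顶点(u, v, depth)写入占位图(1表示该位置
    存在顶点)与低精度几何图(存放该顶点的深度值)。"""
    occupancy = [[0] * width for _ in range(height)]
    geometry = [[0] * width for _ in range(height)]
    for u, v, depth in patch_points:
        occupancy[v][u] = 1
        geometry[v][u] = depth
    return occupancy, geometry
```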
二、所述第一信息包括第二精度几何信息。
可选地,此种情况下,步骤1021的具体实现过程,包括:
步骤10215,所述编码端获取第一精度几何信息中所包含的顶点的排列顺序;
步骤10216,所述编码端将第一精度几何信息中所包含的顶点对应的第二精度几何信息排列在二维图像中,生成第二精度的几何图。
需要说明的是,高精度几何信息采用原始片(raw patch)的排列方式,将低精度几何图中的顶点对应的高精度几何信息排列在二维图像中,得到raw patch,以此便生成高精度几何图。主要分为三步,如图4所示,包括:
步骤401,获取顶点排列顺序,逐行从左向右扫描低精度几何图,将每个顶点的扫描顺序作为raw patch中顶点的排列顺序。
步骤402,生成raw patch。
需要说明的是,raw patch是将顶点的三维坐标按照如图5所示的方式逐行排列,形成的矩形patch。按照第一步中得到的顶点排列顺序,将顶点的高精度几何信息依次排列,得到高精度几何信息raw patch。
步骤403,将高精度几何信息放置在一张二维图像中,生成高精度几何图。
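上述三步(按扫描顺序确定顶点排列、生成raw patch、放置到二维图像)中的前两步可用如下Python草图示意;其中high_by_pos查找结构为本文假定,仅作说明:

```python
def build_high_precision_raw_patch(occupancy, high_by_pos):
    """逐行从左向右扫描低精度几何图的占位信息(步骤401),
    按该扫描顺序依次取出各顶点的高精度(xh, yh, zh)三元组,
    得到高精度raw patch的排列(步骤402)。
    high_by_pos将(行, 列)映射到该顶点的高精度几何信息。"""
    raw_patch = []
    for r, row in enumerate(occupancy):
        for c, occupied in enumerate(row):
            if occupied:
                raw_patch.append(high_by_pos[(r, c)])
    return raw_patch
```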
需要说明的是,在编码得到几何图子码流时,编码端是将对第一精度的几何图和第二精度的几何图进行编码,获取几何图子码流。
三、所述第一信息包括补充点的信息。
可选地,此种情况下,步骤1021的具体实现过程,包括:
步骤10217、所述编码端将所述补充点的第三精度几何信息排列成第一原始片;
步骤10218、所述编码端按照与所述第一原始片相同的排列顺序,将所述补充点的第四精度几何信息排列成第二原始片;
步骤10219、所述编码端对所述第一原始片和所述第二原始片进行压缩,获取补充点的几何图。
需要说明的是,本申请实施例中对于补充点的几何信息分为的低精度部分和高精度部分分别进行编码。首先,按照任意顺序将补充点的低精度几何信息排列成补充点低精度raw patch;然后,按照与补充点低精度raw patch相同的顺序将高精度几何信息排列成补充点高精度raw patch;最后,对补充点低精度raw patch和高精度raw patch进行压缩,可以采用多种压缩方法。其中,一种方法是对raw patch中的值进行游程编码、熵编码等方式编码,另一种方法是,将补充点低精度raw patch加入低精度几何图中的空白区域,将补充点高精度raw patch加入高精度几何图中的空白区域,得到补充点的几何图。
本申请实施例的基于视频的三维网格几何信息编码框架如图6所示,总体编码流程为:
首先,在量化之前可以选择是否对三维网格进行抽样简化;然后,对三维网格进行量化,由此可能会产生低精度几何信息、高精度几何信息和补充点信息三部分;对于低精度几何信息,采用投影的方式进行patch划分、patch排列生成patch序列压缩信息(patch的划分信息)、占位图和低精度几何图;对于可能存在的高精度几何信息可以采用raw patch的排列方式,生成高精度几何图(这里需要说明的是,可以对高精度几何图单独编码成一路码流,或者,也可以将高精度几何图填充进低精度几何图中,对低精度几何图进行编码得到一路码流);对于可能存在的补充点,可以将补充点的几何信息分为低精度部分和高精度部分,分别进行raw patch排列,单独编码成一路码流,或者将raw patch加入几何图中;最后,编码patch序列压缩信息、占位图、几何图分别得到对应的子码流,并将多路子码流混流,得到最终输出码流。
需要说明的是,本申请给出了如何进行三维网格的几何信息进行编码的实现方式,通过第一量化参数对三维网格的几何信息进行量化,并通过第二量化参数对高精度几何信息进一步量化,分别对量化后的不同精度的信息进行编码,以此能提高三维网格几何信息的压缩效率。
本申请实施例提供的编码方法,执行主体可以为编码装置。本申请实施例中以编码装置执行编码方法为例,说明本申请实施例提供的编码装置。
如图7所示,本申请实施例提供一种编码装置700,包括:
第一处理模块701,用于根据第一量化参数,对所述目标三维网格的几何信息进行量化处理,得到第一信息,所述第一信息包括以下至少一项:第一精度几何信息、第二精度几何信息、补充点的信息;所述补充点的信息包括所述补充点的第四精度几何信息,所述第四精度几何信息为补充点在被量化过程中丢失的三维坐标信息;
第二处理模块702,用于根据第二量化参数对所述第一信息中的第一部分几何信息进行量化处理,所述第一部分几何信息包括第二精度几何信息和补充点的第四精度几何信息中的至少一项;
第一编码模块703,用于对量化处理后的第一信息和量化信息进行编码,所述量化信息包括用于指示所述第一量化参数的第一量化信息和用于指示第二量化参数的第二量化信息;
其中,所述第一精度几何信息为所述目标三维网格量化后的几何信息,所述第二精度几何信息为所述目标三维网格量化过程中丢失的几何信息,所述补充点的信息为量化过程中产生的需要额外处理的点的信息。
可选地,所述第一编码模块703用于对所述第一量化参数进行编码;
或者,用于获取所述第一量化参数相对于第一基准量化参数的第一偏移值,所述第一基准量化参数为目标画面组GOP中设置的与所述目标三维网格的几何信息对应的基准量化参数,所述目标GOP为所述目标三维网格对应的视频帧所在的GOP;对所述第一偏移值进行编码。
可选地,本申请实施例的装置,还包括:
第一写入模块,用于在视频帧序列中的每个视频帧对应的目标三维网格采用的所述第一量化信息相同的情况下,在目标码流的序列参数集中写入编码后的第一量化信息,所述目标码流是根据每个所述视频帧对应的目标三维网格的第一信息得到的;
或者,在视频帧序列中的至少两个视频帧对应的目标三维网格采用的所述第一量化信息不同的情况下,在每个视频帧对应的码流的头部写入与所述视频帧对应的编码后的第一量化信息,每个视频帧对应的码流是根据所述视频帧对应的目标三维网格的第一信息得到的。
可选地,本申请实施例的装置,所述第一编码模块703用于对所述第二量化参数进行编码;
或者,获取所述第二量化参数相对于第二基准量化参数的第二偏移值,所述第二基准量化参数为目标画面组GOP中设置的与所述第一部分几何信息对应的基准量化参数,所述目标GOP为所述目标三维网格对应的视频帧所在的GOP;对所述第二偏移值进行编码。
可选地,本申请实施例的装置,还包括:
第二写入模块,用于在视频帧序列中的每个视频帧对应的第一部分几何信息采用的所述第二量化信息相同的情况下,在目标码流的序列参数集中写入编码后的第二量化信息,所述目标码流是根据每个所述视频帧对应的目标三维网格的第一信息得到的;
或者,在视频帧序列中的至少两个视频帧对应的第一部分几何信息采用的所述第二量化信息不同的情况下,在每个视频帧对应的码流的头部写入与所述视频帧对应的编码后的第二量化信息,每个视频帧对应的码流是根据所述视频帧对应的目标三维网格的第一信息得到的。
可选地,本申请实施例的装置,所述第一处理模块701用于根据每一分量的第一量化参数,对所述目标三维网格中的每一顶点进行量化,获取第一精度几何信息。
可选地,本申请实施例的装置,所述第一处理模块701还用于根据所述第一精度几何信息以及所述每一分量的第一量化参数,获取第二精度几何信息。
可选地,本申请实施例的装置,所述第一处理模块701还用于根据所述目标三维网格的几何信息和所述第一精度几何信息,确定补充点的信息。
可选地,本申请实施例的装置,所述补充点的信息,还包括以下至少一项:
补充点对应的第一精度几何信息中顶点的索引;
补充点的第三精度几何信息,所述第三精度几何信息为补充点被量化后的三维坐标信息。
可选地,本申请实施例的装置,所述第一编码模块703包括:
第三获取子模块,用于对量化处理后的第一信息进行处理,获取第二信息,所述第二信息包括占位图和几何图中的至少一项;
第一编码子模块,用于对所述第二信息进行编码。
可选地,本申请实施例的装置,在所述量化处理后的第一信息包括第一精度几何信息的情况下,所述第三获取子模块包括:
第一划分单元,用于对所述第一精度几何信息进行三维片划分;
第一获取单元,用于将划分的三维片进行二维投影,获取二维片;
第二获取单元,用于将所述二维片进行打包,获取二维图像信息;
第三获取单元,用于根据所述二维图像信息,获取第一精度的占位图和第一精度的几何图。
可选地,本申请实施例的装置,所述第三获取子模块还包括:
第四获取单元,用于在第二获取单元将所述二维片进行打包,获取二维图像信息之后,根据获取二维图像信息过程中的信息,获取片信息;
第五获取单元,用于对所述片信息进行编码,获取片信息子码流。
可选地,本申请实施例的装置,在所述量化处理后的第一信息包括第二精度几何信息的情况下,所述第三获取子模块包括:
第六获取单元,用于获取第一精度几何信息中所包含的顶点的排列顺序;
第七获取单元,用于将第一精度几何信息中所包含的顶点对应的第二精度几何信息排列在二维图像中,生成第二精度的几何图。
可选地,本申请实施例的装置,所述第一编码子模块用于对第一精度的几何图和第二精度的几何图进行编码,获取几何图子码流。
可选地,本申请实施例的装置,在所述量化处理后的第一信息包括补充点的信息的情况下,所述第三获取子模块包括:
第一排列单元,用于将所述补充点的第三精度几何信息排列成第一原始片;
第二排列单元,用于按照与所述第一原始片相同的排列顺序,将所述补充点的第四精度几何信息排列成第二原始片;
第八获取单元,用于对所述第一原始片和所述第二原始片进行压缩,获取补充点的几何图。
上述方案,通过第一量化参数对三维网格的几何信息进行量化,使得量化后三维网格的顶点的间距减小,进而减小投影后二维顶点的间距,以此可以提高三维网格的几何信息的压缩效率,且通过第二量化参数对第一信息中的第一部分几何信息(即高精度几何信息)进行量化处理,能够有效控制高精度几何信息的比特数,进而能够有效控制编码质量。
该装置实施例与上述编码方法实施例对应,上述方法实施例的各个实施过程和实现方式均可适用于该装置实施例中,且能达到相同的技术效果。
具体地,本申请实施例还提供了一种编码设备,如图8所示,该编码设备800包括:处理器801、网络接口802和存储器803。其中,网络接口802例如为通用公共无线接口(common public radio interface,CPRI)。
具体地,本申请实施例的编码设备800还包括:存储在存储器803上并可在处理器801上运行的指令或程序,处理器801调用存储器803中的指令或程序执行图7所示各模块执行的方法,并达到相同的技术效果,为避免重复,故不在此赘述。
如图9所示,本申请实施例还提供一种解码方法,包括:
步骤901:解码端对获取的码流进行解码处理,获取量化信息和第一信息,所述第一信息包括以下至少一项:第一精度几何信息、第二精度几何信息、补充点的信息,所述补充点的信息包括所述补充点的第四精度几何信息,所述第四精度几何信息为补充点在被量化过程中丢失的三维坐标信息;所述量化信息包括用于指示所述第一量化参数的第一量化信息和用于指示第二量化参数的第二量化信息,所述第二量化参数为对所述第一信息中的第一部分几何信息进行量化处理的量化参数,所述第一部分几何信息包括第二精度几何信息和补充点的第四精度几何信息中的至少一项。
步骤902:所述解码端根据所述量化信息,对所述第一信息进行反量化处理,获取目标三维网格。
其中,所述第一精度几何信息为所述目标三维网格量化后的几何信息,所述第二精度几何信息为所述目标三维网格量化过程中丢失的几何信息,所述补充点的信息为量化过程中产生的需要额外处理的点的信息。
本申请实施例中,目标三维网格可以理解为任意视频帧对应的三维网格,该目标三维网格的几何信息可以理解为是三维网格中顶点的坐标,该坐标通常指的是三维坐标。
上述第一信息已在编码端的方法实施例中进行详细说明,此处不再赘述。
本申请实施例中,解码端对获取的码流进行解码处理,获取量化信息和第一信息,并根据该量化信息能够快速地对第一信息进行反量化处理,得到目标三维网格。
本申请实施例中,可通过对获取的码流解码直接获取第一量化参数,也可获取与第一量化参数相对于第一基准量化参数的第一偏移值,并基于该第一偏移值获取第一量化参数。
可选地,所述获取第一量化参数包括:
所述解码端获取第一量化参数相对于第一基准量化参数的第一偏移值,所述第一基准量化参数为目标画面组GOP中设置的与所述目标三维网格的几何信息对应的基准量化参数,所述目标GOP为所述目标三维网格对应的视频帧所在的GOP;
根据所述第一偏移值和所述第一基准量化参数,获取所述第一量化参数。
可选地,所述获取第一量化信息包括:
在视频帧序列中的每个视频帧对应的目标三维网格采用的所述第一量化信息相同的情况下,在目标码流的序列参数集中获取所述第一量化信息,所述目标码流是编码端对每个所述视频帧对应的目标三维网格的第一信息进行编码得到的;
或者,在视频帧序列中的至少两个视频帧对应的目标三维网格采用的所述第一量化信息不同的情况下,在每个视频帧对应的码流的头部获取与所述视频帧对应的第一量化信息,每个视频帧对应的码流是编码端对所述视频帧对应的目标三维网格的第一信息进行编码得到的。
可选地,所述解码端获取第二量化参数,包括:
所述解码端获取所述第二量化参数相对于第二基准量化参数的第二偏移值,所述第二基准量化参数为目标画面组GOP中设置的与所述第一部分几何信息对应的基准量化参数,所述目标GOP为所述目标三维网格对应的视频帧所在的GOP;
所述解码端根据所述第二偏移值和所述第二基准量化参数,获取所述第二量化参数。
本申请实施例中,可通过对获取的码流解码直接获取第二量化参数,也可获取与第二量化参数相对于第二基准量化参数的第二偏移值,并基于该第二偏移值获取第二量化参数。
可选地,所述解码端获取第二量化参数,包括:
在视频帧序列中的每个视频帧对应的第一部分几何信息采用的所述第二量化参数相同的情况下,在目标码流的序列参数集中获取第二量化参数,所述目标码流是编码端对每个所述视频帧对应的目标三维网格的第一信息进行编码得到的;
或者,在视频帧序列中的至少两个视频帧对应的第一部分几何信息采用的所述第二量化参数不同的情况下,在每个视频帧对应的码流的头部获取与所述视频帧对应的编码后的第二量化参数,每个视频帧对应的码流是编码端对所述视频帧对应的目标三维网格的第一信息进行编码得到的。
本申请实施例中,几何信息重建首先需要在码流中解码第一量化参数以及第二量化参数。在码流的对应位置读取相应的第一量化参数和第二量化参数,利用与编码方法对应的解码方法得到第一量化参数和第二量化参数。如果整个序列都采用同样的量化参数,这时可以在码流的序列参数集中读取;如果视频帧采用不同的量化参数,可以在每一帧码流的头部读取;如果为每个GOP设置一个基准量化参数,每一帧根据时域预测结构在基准量化参数上进行偏移,那么可以在每一帧数据的头部读取。第一量化参数的读取位置与第二量化参数的读取位置类似,但二者不一定存放在相同的位置。
可选地,所述解码端根据所述量化信息,对所述第一信息进行反量化处理,获取目标三维网格,包括:
所述解码端根据所述第二量化信息,对所述第一信息中的第一部分几何信息进行反量化处理;
所述解码端根据所述第一量化参数,对量化处理后的第一信息进行反量化处理,获取目标三维网格。
这里,先根据第二量化参数对第一信息中的第一部分几何信息进行反量化处理,再根据第一量化参数对第二部分几何信息和反量化处理后的第一部分几何信息进行反量化处理,所述第二部分几何信息为所述第一信息中除所述第一部分几何信息之外的信息。
可选地,所述解码端对获取的码流进行解码处理,获取第一信息的具体实现方式,包括:
所述解码端根据获取的码流,获取目标子码流,所述目标子码流包括:片信息子码流、占位图子码流和几何图子码流;
所述解码端根据所述目标子码流,获取第二信息,所述第二信息包括:占位图和几何图中的至少一项;
所述解码端根据所述第二信息,获取所述第一信息。
可选地,在所述第一信息包括第一精度几何信息的情况下,所述根据所述第二信息,获取第一信息,包括:
所述解码端根据第一精度的占位图和第一精度的几何图,获取二维图像信息;
所述解码端根据所述二维图像信息,获取二维片;
所述解码端根据所述片信息子码流对应的片信息对所述二维片进行三维逆投影,获取三维片;
所述解码端根据所述三维片,获取第一精度几何信息。
可选地,在所述第一信息包括第二精度几何信息的情况下,所述根据所述第二信息,获取第一信息,包括:
所述解码端根据第二精度的几何图,获取第二精度几何信息。
可选地,在所述第一信息包括补充点的信息的情况下,所述根据所述第二信息,获取第一信息,包括:
所述解码端根据补充点的几何图,确定所述补充点的第三精度几何信息对应的第一原始片以及所述补充点的第四精度几何信息对应的第二原始片;
所述解码端根据所述第一原始片和所述第二原始片,确定补充点的信息。
需要说明的是,本申请实施例中对于补充点的几何信息分为的低精度部分和高精度部分分别进行解码。首先,对补充点的几何图进行解压缩,可以采用多种解压缩方法。其中,一种方法是对几何图进行游程解码、熵解码等方式解码,另一种方法是,将补充点低精度raw patch从低精度几何图中取出,将补充点高精度raw patch从高精度几何图中取出。然后,按照特定顺序从补充点低精度raw patch中获取补充点的低精度几何信息,按照特定顺序从补充点高精度raw patch中获取高精度几何信息;这里需要说明的是,该特定顺序是解码端通过解析码流得到的,即编码端采用何种顺序生成补充点低精度raw patch和补充点高精度raw patch是会通过码流告知解码端的。
可选地,所述解码端根据所述第一量化参数,对量化处理后的所述第一信息进行反量化处理,获取目标三维网格,包括:
所述解码端根据所述第一精度几何信息以及每一分量的第一量化参数,确定所述第一精度几何信息中的每一顶点的坐标。
可选地,所述解码端根据所述第一量化参数,对量化处理后的所述第一信息进行反量化处理,获取目标三维网格,还包括:
所述解码端根据所述目标三维网格中的每一顶点的坐标以及所述第二精度几何信息,确定所述目标三维网格。
需要说明的是,本申请实施例中的几何信息重建过程是利用patch信息、占位图、低精度几何图和高精度几何图等信息,重建三维几何模型的过程。具体过程如图10所示,主要包括以下四步:
步骤1001,获取2D patch;
需要说明的是,获取2D patch是指利用patch信息从占位图和几何图中分割出2D patch的占位信息和深度信息。Patch信息中包含了每个2D patch的包围盒在占位图和低精度几何图中的位置和大小,利用patch信息、占位图和低精度几何图可以直接获取到2D patch的占位信息和低精度几何信息。对于高精度几何信息,利用低精度几何图的顶点扫描顺序,将高精度raw patch中的高精度几何信息与低精度几何图顶点进行对应,从而得到2D patch的高精度几何信息。对于补充点的几何信息,直接解码补充点的低精度raw patch和高精度raw patch即可获得补充点的低精度几何信息和高精度几何信息。
步骤1002,重建3D patch;
需要说明的是,重建3D patch是指利用2D patch中的占位信息和低精度几何信息,将2D patch中的顶点重建为低精度3D patch。2D patch的占位信息中包含了顶点在patch投影平面局部坐标系中相对于坐标原点的位置,深度信息包含了顶点在投影平面法线方向上的深度值。因此,利用占位信息和深度信息可以在局部坐标系中将2D patch重建为低精度3D patch。
步骤1003,重建低精度几何模型;
需要说明的是,重建低精度几何模型是指利用重建的低精度3D patch,重建整个低精度三维几何模型。Patch信息中包含了3D patch由局部坐标系转换成三维几何模型全局坐标系的转换关系,利用坐标转换关系将所有的3D patch转换到全局坐标系下,就得到了低精度三维几何模型。此外,对于补充点,直接利用低精度raw patch中的几何信息,得到补充点在全局坐标系下的低精度坐标值,从而得到完整的低精度三维几何模型。
步骤1004,重建高精度几何模型;
利用第二量化参数对高精度几何信息执行反量化过程,得到反量化后的高精度几何信息。该反量化过程与几何信息的反量化过程类似,但是没有残差信息作为补充。然后,利用低精度几何信息、高精度几何信息以及解码得到的量化参数可以重建高精度几何模型。
重建高精度几何模型是指在低精度几何模型的基础上,利用高精度几何信息,重建高精度几何模型的过程。在获取2D patch的过程中,将高精度几何信息与低精度几何信息进行了对应,根据顶点的高精度几何信息和低精度几何信息可以重建出顶点的高精度三维坐标。根据应用的要求,可以选择重建全部顶点的高精度三维坐标,也可以重建部分顶点的高精度三维坐标。高精度三维坐标(xr,yr,zr)的计算过程,如公式二十五至公式二十七所示:
公式二十五:xr=f3(xl,xh,QPx);
公式二十六:yr=f3(yl,yh,QPy);
公式二十七:zr=f3(zl,zh,QPz);
f3函数是重建函数,重建函数的计算过程与编码端量化函数的计算过程相对应,有多种实现方式。如果f1和f2函数采用公式七至公式十二的实现方式,则重建函数的实现方式如公式二十八至公式三十所示:
公式二十八:xr=xl*QPx+xh
公式二十九:yr=yl*QPy+yh
公式三十:zr=zl*QPz+zh
如果f1和f2函数采用公式十三至公式十八的实现方式,则重建函数的实现方式如公式三十一至公式三十三所示:
公式三十一:xr=(xl<<log2QPx)|xh
公式三十二:yr=(yl<<log2QPy)|yh
公式三十三:zr=(zl<<log2QPz)|zh
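f3重建函数与编码端f1/f2拆分之间的无损往返关系,可用如下Python草图示意;函数名为本文所设:

```python
def f1_f2(x, qp):
    """编码端的f1/f2拆分(公式七至十二的通用实现)。"""
    low = x // qp
    return low, x - low * qp

def f3(low, high, qp):
    """重建函数f3:qp为2的整数次幂时按公式三十一至三十三用位运算,
    否则按公式二十八至三十用乘加运算。"""
    if qp & (qp - 1) == 0:
        return (low << (qp.bit_length() - 1)) | high
    return low * qp + high
```

对任意坐标x与量化参数qp,f3(*f1_f2(x, qp), qp)恢复出原始坐标,说明低精度与高精度两部分合起来是无损的。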
可选地,所述解码端根据所述第一量化参数,对量化处理后的所述第一信息进行反量化处理,获取目标三维网格,还包括:
所述解码端将所述补充点的信息以及所述第一精度几何信息中的每一顶点的坐标,确定所述目标三维网格。
这里,补充点的信息中的第四精度几何信息是根据第二量化参数进行量化处理后的几何信息。
可选地,所述补充点的信息,还包括以下至少一项:
补充点对应的第一精度几何信息中顶点的索引;
补充点的第三精度几何信息,所述第三精度几何信息为补充点被量化后的三维坐标信息。
本申请实施例的基于视频的三维网格几何信息解码框架如图11所示,总体解码流程为:
首先,将码流分解成patch信息子码流、占位图子码流、几何图子码流(这里需要说明的是,该几何图子码流中可以包括低精度几何图对应的一路码流以及高精度几何图对应的一路码流,或者,该几何图子码流中包括填充有高精度几何图的低精度几何图对应的一路码流),并分别进行解码得到patch信息、占位图、几何图;使用占位图、低精度几何图可以重建低精度网格的几何信息,使用占位图、低精度几何图以及高精度几何图可以重建高精度网格的几何信息;最终,使用重建的几何信息以及其他编解码方式得到的连接关系等信息重建网格。
需要说明的是,本申请实施例是与上述编码方法的实施例对应的对端的方法实施例,解码过程为编码的反过程,上述编码侧的所有实现方式均适用于该解码端的实施例中,也能达到与之相同的技术效果,在此不再赘述。
如图12所示,本申请实施例还提供一种解码装置1200,包括:
第三处理模块1201,用于对获取的码流进行解码处理,获取量化信息和第一信息,所述第一信息包括以下至少一项:第一精度几何信息、第二精度几何信息、补充点的信息,所述补充点的信息包括所述补充点的第四精度几何信息,所述第四精度几何信息为补充点在被量化过程中丢失的三维坐标信息;所述量化信息包括用于指示所述第一量化参数的第一量化信息和用于指示第二量化参数的第二量化信息,所述第二量化参数为对所述第一信息中的第一部分几何信息进行量化处理的量化参数,所述第一部分几何信息包括第二精度几何信息和补充点的第四精度几何信息中的至少一项;
第四处理模块1202,用于根据所述量化信息,对所述第一信息进行反量化处理,获取目标三维网格;
其中,所述第一精度几何信息为所述目标三维网格量化后的几何信息,所述第二精度几何信息为所述目标三维网格量化过程中丢失的几何信息,所述补充点的信息为量化过程中产生的需要额外处理的点的信息。
可选地,本申请实施例的装置,所述第三处理模块1201包括:
第一获取子模块,用于获取第一量化参数相对于第一基准量化参数的第一偏移值,所述第一基准量化参数为目标画面组GOP中设置的与所述目标三维网格的几何信息对应的基准量化参数,所述目标GOP为所述目标三维网格对应的视频帧所在的GOP;
第二获取子模块,用于根据所述第一偏移值和所述第一基准量化参数,获取所述第一量化参数。
可选地,本申请实施例的装置,所述第三处理模块1201用于:
在视频帧序列中的每个视频帧对应的目标三维网格采用的所述第一量化信息相同的情况下,在目标码流的序列参数集中获取所述第一量化信息,所述目标码流是编码端对每个所述视频帧对应的目标三维网格的第一信息进行编码得到的;
或者,在视频帧序列中的至少两个视频帧对应的目标三维网格采用的所述第一量化信息不同的情况下,在每个视频帧对应的码流的头部获取与所述视频帧对应的第一量化信息,每个视频帧对应的码流是编码端对所述视频帧对应的目标三维网格的第一信息进行编码得到的。
可选地,本申请实施例的装置,所述第三处理模块1201包括:
第三获取子模块,用于获取所述第二量化参数相对于第二基准量化参数的第二偏移值,所述第二基准量化参数为目标画面组GOP中设置的与所述第一部分几何信息对应的基准量化参数,所述目标GOP为所述目标三维网格对应的视频帧所在的GOP;
第四获取子模块,用于根据所述第二偏移值和所述第二基准量化参数,获取所述第二量化参数。
可选地,本申请实施例的装置,所述第三处理模块1201用于:
在视频帧序列中的每个视频帧对应的第一部分几何信息采用的所述第二量化参数相同的情况下,在目标码流的序列参数集中获取第二量化参数,所述目标码流是编码端对每个所述视频帧对应的目标三维网格的第一信息进行编码得到的;
或者,在视频帧序列中的至少两个视频帧对应的第一部分几何信息采用的所述第二量化参数不同的情况下,在每个视频帧对应的码流的头部获取与所述视频帧对应的编码后的第二量化参数,每个视频帧对应的码流是编码端对所述视频帧对应的目标三维网格的第一信息进行编码得到的。
可选地,本申请实施例的装置,所述第四处理模块1202包括:
第一处理子模块,用于根据所述第二量化信息,对所述第一信息中的第一部分几何信息进行反量化处理;
第二处理子模块,用于根据所述第一量化参数,对量化处理后的第一信息进行反量化处理,获取目标三维网格。
可选地,本申请实施例的装置,所述第二处理模块包括:
第五获取子模块,用于根据获取的码流,获取目标子码流,所述目标子码流包括:片信息子码流、占位图子码流和几何图子码流;
第六获取子模块,用于根据所述目标子码流,获取第二信息,所述第二信息包括:占位图和几何图中的至少一项;
第七获取子模块,用于根据所述第二信息,获取所述第一信息。
可选地,本申请实施例的装置,在所述第一信息包括第一精度几何信息的情况下,所述第七获取子模块包括:
第九获取单元,用于根据第一精度的占位图和第一精度的几何图,获取二维图像信息;
第十获取单元,用于根据所述二维图像信息,获取二维片;
第十一获取单元,用于根据所述片信息子码流对应的片信息对所述二维片进行三维逆投影,获取三维片;
第十二获取单元,用于根据所述三维片,获取第一精度几何信息。
可选地,本申请实施例的装置,在所述第一信息包括第二精度几何信息的情况下,所述第七获取子模块用于根据第二精度的几何图,获取第二精度几何信息。
可选地,本申请实施例的装置,在所述第一信息包括补充点的信息的情况下,所述第七获取子模块包括:
第一确定单元,用于根据补充点的几何图,确定所述补充点的第三精度几何信息对应的第一原始片以及所述补充点的第四精度几何信息对应的第二原始片;
第二确定单元,用于根据所述第一原始片和所述第二原始片,确定补充点的信息。
可选地,本申请实施例的装置,所述第三处理模块1201用于根据所述第一精度几何信息以及每一分量的第一量化参数,确定所述第一精度几何信息中的每一顶点的坐标。
可选地,本申请实施例的装置,所述第三处理模块1201还用于根据所述目标三维网格中的每一顶点的坐标以及所述第二精度几何信息,确定所述目标三维网格。
可选地,本申请实施例的装置,所述第三处理模块1201还用于将所述补充点的信息以及所述第一精度几何信息中的每一顶点的坐标,确定所述目标三维网格。
可选地,所述补充点的信息,还包括以下至少一项:
补充点对应的第一精度几何信息中顶点的索引;
补充点的第三精度几何信息,所述第三精度几何信息为补充点被量化后的三维坐标信 息。
需要说明的是,该装置实施例是与上述方法对应的装置,上述方法实施例中的所有实现方式均适用于该装置实施例中,也能达到相同的技术效果,在此不再赘述。
优选的,本申请实施例还提供一种解码设备,包括处理器,存储器,存储在存储器上并可在所述处理器上运行的程序或指令,该程序或指令被处理器执行时实现上述的解码方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
优选的,本申请实施例还提供一种编码设备,包括处理器,存储器,存储在存储器上并可在所述处理器上运行的程序或指令,该程序或指令被处理器执行时实现上述的解码方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
本申请实施例还提供一种可读存储介质,计算机可读存储介质上存储有程序或指令,该程序或指令被处理器执行时实现上述的编码方法或解码方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
其中,所述处理器为上述实施例中所述的解码设备中的处理器。所述可读存储介质,包括计算机可读存储介质,如计算机只读存储器ROM、随机存取存储器RAM、磁碟或者光盘等。
其中,所述的计算机可读存储介质,如只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等。
本申请实施例还提供了一种编码设备,包括处理器及通信接口,其中,所述处理器用于根据第一量化参数,对所述目标三维网格的几何信息进行量化处理,得到第一信息,所述第一信息包括以下至少一项:第一精度几何信息、第二精度几何信息、补充点的信息;所述补充点的信息包括所述补充点的第四精度几何信息,所述第四精度几何信息为补充点在被量化过程中丢失的三维坐标信息;根据第二量化参数对所述第一信息中的第一部分几何信息进行量化处理,所述第一部分几何信息包括第二精度几何信息和补充点的第四精度几何信息中的至少一项;对量化处理后的第一信息和量化信息进行编码,所述量化信息包括用于指示所述第一量化参数的第一量化信息和用于指示第二量化参数的第二量化信息;
其中,所述第一精度几何信息为所述目标三维网格量化后的几何信息,所述第二精度几何信息为所述目标三维网格量化过程中丢失的几何信息,所述补充点的信息为量化过程中产生的需要额外处理的点的信息。
该编码设备实施例是与上述编码方法实施例对应的,上述方法实施例的各个实施过程和实现方式均可适用于该编码设备实施例中,且能达到相同的技术效果。
本申请实施例还提供了一种解码设备,包括处理器及通信接口,其中,所述处理器用于对获取的码流进行解码处理,获取量化信息和第一信息,所述第一信息包括以下至少一项:第一精度几何信息、第二精度几何信息、补充点的信息,所述补充点的信息包括所述补充点的第四精度几何信息,所述第四精度几何信息为补充点在被量化过程中丢失的三维坐标信息;所述量化信息包括用于指示所述第一量化参数的第一量化信息和用于指示第二量化参数的第二量化信息,所述第二量化参数为对所述第一信息中的第一部分几何信息进行量化处理的量化参数,所述第一部分几何信息包括第二精度几何信息和补充点的第四精度几何信息中的至少一项;根据所述量化信息,对所述第一信息进行反量化处理,获取目标三维网格;
其中,所述第一精度几何信息为所述目标三维网格量化后的几何信息,所述第二精度几何信息为所述目标三维网格量化过程中丢失的几何信息,所述补充点的信息为量化过程中产生的需要额外处理的点的信息。
该解码设备实施例是与上述解码方法实施例对应的,上述方法实施例的各个实施过程和实现方式均可适用于该解码设备实施例中,且能达到相同的技术效果。
具体地,本申请实施例还提供了一种解码设备。具体地,本申请实施例的解码设备还包括:存储在存储器上并可在处理器上运行的指令或程序,处理器调用存储器中的指令或程序执行图12所示各模块执行的方法,并达到相同的技术效果,为避免重复,故不在此赘述。
可选的,如图13所示,本申请实施例还提供一种通信设备1300,包括处理器1301和存储器1302,存储器1302上存储有可在所述处理器1301上运行的程序或指令,例如,该通信设备1300为编码设备时,该程序或指令被处理器1301执行时实现上述编码方法实施例的各个步骤,且能达到相同的技术效果。该通信设备1300为解码设备时,该程序或指令被处理器1301执行时实现上述解码方法实施例的各个步骤,且能达到相同的技术效果,为避免重复,这里不再赘述。
本申请实施例另提供了一种芯片,所述芯片包括处理器和通信接口,所述通信接口和所述处理器耦合,所述处理器用于运行程序或指令,实现上述编码方法或解码方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
应理解,本申请实施例提到的芯片还可以称为系统级芯片,系统芯片,芯片系统或片上系统芯片等。
本申请实施例另提供了一种计算机程序/程序产品,所述计算机程序/程序产品被存储在存储介质中,所述计算机程序/程序产品被至少一个处理器执行以实现上述编码方法或解码方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
本申请实施例还提供了一种通信系统,至少包括:编码设备和解码设备,所述编码设备可用于执行如上所述的编码方法的步骤,所述解码设备可用于执行如上所述的解码方法的步骤。且能达到相同的技术效果,为避免重复,这里不再赘述。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除 在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。此外,需要指出的是,本申请实施方式中的方法和装置的范围不限按示出或讨论的顺序来执行功能,还可包括根据所涉及的功能按基本同时的方式或按相反的顺序来执行功能,例如,可以按不同于所描述的次序来执行所描述的方法,并且还可以添加、省去、或组合各种步骤。另外,参照某些示例所描述的特征可在其他示例中被组合。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以计算机软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本申请各个实施例所述的方法。
上面结合附图对本申请的实施例进行了描述,但是本申请并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本申请的启示下,在不脱离本申请宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本申请的保护之内。

Claims (36)

  1. 一种编码方法,包括:
    编码端根据第一量化参数,对目标三维网格的几何信息进行量化处理,得到第一信息,所述第一信息包括以下至少一项:第一精度几何信息、第二精度几何信息、补充点的信息;所述补充点的信息包括所述补充点的第四精度几何信息,所述第四精度几何信息为补充点在被量化过程中丢失的三维坐标信息;
    所述编码端根据第二量化参数对所述第一信息中的第一部分几何信息进行量化处理,所述第一部分几何信息包括第二精度几何信息和补充点的第四精度几何信息中的至少一项;
    所述编码端对量化处理后的第一信息和量化信息进行编码,所述量化信息包括用于指示所述第一量化参数的第一量化信息和用于指示第二量化参数的第二量化信息;
    其中,所述第一精度几何信息为所述目标三维网格量化后的几何信息,所述第二精度几何信息为所述目标三维网格量化过程中丢失的几何信息,所述补充点的信息为量化过程中产生的需要额外处理的点的信息。
  2. 根据权利要求1所述的方法,其中,所述编码端对所述第一量化信息进行编码,包括:
    所述编码端对所述第一量化参数进行编码;
    或者,所述编码端获取所述第一量化参数相对于第一基准量化参数的第一偏移值,所述第一基准量化参数为目标画面组GOP中设置的与所述目标三维网格的几何信息对应的基准量化参数,目标GOP为所述目标三维网格对应的视频帧所在的GOP;所述编码端对所述第一偏移值进行编码。
  3. 根据权利要求1或2所述的方法,其中,还包括:
    在视频帧序列中的每个视频帧对应的目标三维网格采用的所述第一量化信息相同的情况下,在目标码流的序列参数集中写入编码后的第一量化信息,所述目标码流是根据每个所述视频帧对应的目标三维网格的第一信息得到的;
    或者,在视频帧序列中的至少两个视频帧对应的目标三维网格采用的所述第一量化信息不同的情况下,在每个视频帧对应的码流的头部写入与所述视频帧对应的编码后的第一量化信息,每个视频帧对应的码流是根据所述视频帧对应的目标三维网格的第一信息得到的。
  4. 根据权利要求1所述的方法,其中,所述编码端对所述第二量化信息进行编码,包括:
    所述编码端对所述第二量化参数进行编码;
    或者,所述编码端获取所述第二量化参数相对于第二基准量化参数的第二偏移值,所述第二基准量化参数为目标画面组GOP中设置的与所述第一部分几何信息对应的基准量化参数,目标GOP为所述目标三维网格对应的视频帧所在的GOP;所述编码端对所述第二偏移值进行编码。
  5. 根据权利要求1或4所述的方法,其中,还包括:
    在视频帧序列中的每个视频帧对应的第一部分几何信息采用的所述第二量化信息相同的情况下,在目标码流的序列参数集中写入编码后的第二量化信息,所述目标码流是根据每个所述视频帧对应的目标三维网格的第一信息得到的;
    或者,在视频帧序列中的至少两个视频帧对应的第一部分几何信息采用的所述第二量化信息不同的情况下,在每个视频帧对应的码流的头部写入与所述视频帧对应的编码后的第二量化信息,每个视频帧对应的码流是根据所述视频帧对应的目标三维网格的第一信息得到的。
  6. 根据权利要求1所述的方法,其中,所述编码端根据第一量化参数,对所述目标三维网格的几何信息进行量化处理,得到第一信息,包括:
    所述编码端根据每一分量的第一量化参数,对所述目标三维网格中的每一顶点进行量化,获取第一精度几何信息。
  7. 根据权利要求6所述的方法,其中,所述编码端根据第一量化参数,对目标三维网格的几何信息进行量化处理,得到第一信息,还包括:
    所述编码端根据所述第一精度几何信息以及所述每一分量的第一量化参数,获取第二精度几何信息。
  8. 根据权利要求6所述的方法,其中,所述编码端根据第一量化参数,对目标三维网格的几何信息进行量化处理,得到第一信息,还包括:
    所述编码端根据所述目标三维网格的几何信息和所述第一精度几何信息,确定补充点的信息。
  9. The method according to claim 1 or 8, wherein the information on supplementary points further comprises at least one of:
    indices of vertices in the first-precision geometry information corresponding to the supplementary points;
    third-precision geometry information of the supplementary points, the third-precision geometry information being the three-dimensional coordinate information of the supplementary points after quantization.
  10. The method according to claim 1, wherein the encoding end encoding the quantized first information comprises:
    the encoding end processing the quantized first information to obtain second information, the second information comprising at least one of an occupancy map and a geometry map;
    the encoding end encoding the second information.
  11. The method according to claim 10, wherein in a case where the quantized first information comprises the first-precision geometry information, processing the quantized first information to obtain the second information comprises:
    the encoding end partitioning the first-precision geometry information into three-dimensional patches;
    the encoding end projecting the partitioned three-dimensional patches into two dimensions to obtain two-dimensional patches;
    the encoding end packing the two-dimensional patches to obtain two-dimensional image information;
    the encoding end obtaining a first-precision occupancy map and a first-precision geometry map from the two-dimensional image information.
  12. The method according to claim 11, wherein after the packing of the two-dimensional patches to obtain the two-dimensional image information, the method further comprises:
    the encoding end obtaining patch information from the information generated in the process of obtaining the two-dimensional image information;
    the encoding end encoding the patch information to obtain a patch information sub-bitstream.
  13. The method according to claim 10, wherein in a case where the quantized first information comprises the second-precision geometry information, processing the quantized first information to obtain the second information comprises:
    the encoding end obtaining the arrangement order of the vertices contained in the first-precision geometry information;
    the encoding end arranging, in a two-dimensional image, the second-precision geometry information corresponding to the vertices contained in the first-precision geometry information, to generate a second-precision geometry map.
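The layout step of claim 13 can be sketched as a row-major fill of a 2D image, following the vertices' arrangement order. The function name, the fixed image width, and the zero padding for unused pixels are illustrative assumptions; the claim only requires that the arrangement order match that of the first-precision geometry information.

```python
import numpy as np

def second_precision_map(low_parts, width):
    # Lay each vertex's second-precision (lost) geometry into a 2D image,
    # row by row, in the vertices' arrangement order; pixels beyond the
    # last vertex remain zero.
    n = len(low_parts)
    height = -(-n // width)                   # ceiling division
    img = np.zeros((height, width, 3), dtype=np.int64)
    for idx, xyz in enumerate(low_parts):
        img[idx // width, idx % width] = xyz
    return img
```

Because the decoder knows the same vertex order, it can read the residuals back out of the image without any extra index signalling.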
  14. The method according to claim 10, wherein the encoding end encoding the second information comprises:
    the encoding end encoding the first-precision geometry map and the second-precision geometry map to obtain a geometry map sub-bitstream.
  15. The method according to claim 10, wherein in a case where the quantized first information comprises the information on supplementary points, processing the quantized first information to obtain the second information comprises:
    the encoding end arranging the third-precision geometry information of the supplementary points into a first raw patch;
    the encoding end arranging the fourth-precision geometry information of the supplementary points into a second raw patch in the same arrangement order as the first raw patch;
    the encoding end compressing the first raw patch and the second raw patch to obtain a geometry map of the supplementary points.
  16. A decoding method, comprising:
    a decoding end decoding an obtained bitstream to obtain quantization information and first information, the first information comprising at least one of: first-precision geometry information, second-precision geometry information, and information on supplementary points, the information on supplementary points comprising fourth-precision geometry information of the supplementary points, the fourth-precision geometry information being the three-dimensional coordinate information lost by the supplementary points during quantization; the quantization information comprising first quantization information for indicating a first quantization parameter and second quantization information for indicating a second quantization parameter, the second quantization parameter being the quantization parameter used to quantize a first part of the geometry information in the first information, the first part of the geometry information comprising at least one of the second-precision geometry information and the fourth-precision geometry information of the supplementary points;
    the decoding end dequantizing the first information according to the quantization information to obtain a target three-dimensional mesh;
    wherein the first-precision geometry information is the geometry information of the target three-dimensional mesh after quantization, the second-precision geometry information is the geometry information lost during quantization of the target three-dimensional mesh, and the information on supplementary points is information on points generated during quantization that require additional processing.
  17. The method according to claim 16, wherein the decoding end obtaining the first quantization parameter comprises:
    the decoding end obtaining a first offset value of the first quantization parameter relative to a first base quantization parameter, the first base quantization parameter being a base quantization parameter set in a target group of pictures (GOP) corresponding to the geometry information of the target three-dimensional mesh, the target GOP being the GOP containing the video frame corresponding to the target three-dimensional mesh;
    obtaining the first quantization parameter according to the first offset value and the first base quantization parameter.
  18. The method according to claim 16, wherein the decoding end obtaining the first quantization information comprises:
    in a case where the first quantization information used by the target three-dimensional mesh corresponding to each video frame in a video frame sequence is the same, obtaining the first quantization information from a sequence parameter set of a target bitstream, the target bitstream being obtained by the encoding end by encoding the first information of the target three-dimensional mesh corresponding to each of the video frames;
    or, in a case where the first quantization information used by the target three-dimensional meshes corresponding to at least two video frames in the video frame sequence differs, obtaining the first quantization information corresponding to each video frame from a header of the bitstream corresponding to that video frame, the bitstream corresponding to each video frame being obtained by the encoding end by encoding the first information of the target three-dimensional mesh corresponding to that video frame.
  19. The method according to claim 16, wherein the decoding end obtaining the second quantization parameter comprises:
    the decoding end obtaining a second offset value of the second quantization parameter relative to a second base quantization parameter, the second base quantization parameter being a base quantization parameter set in a target group of pictures (GOP) corresponding to the first part of the geometry information, the target GOP being the GOP containing the video frame corresponding to the target three-dimensional mesh;
    the decoding end obtaining the second quantization parameter according to the second offset value and the second base quantization parameter.
  20. The method according to claim 16, wherein the decoding end obtaining the second quantization parameter comprises:
    in a case where the second quantization parameter used by the first part of the geometry information corresponding to each video frame in a video frame sequence is the same, obtaining the second quantization parameter from a sequence parameter set of a target bitstream, the target bitstream being obtained by the encoding end by encoding the first information of the target three-dimensional mesh corresponding to each of the video frames;
    or, in a case where the second quantization parameters used by the first parts of the geometry information corresponding to at least two video frames in the video frame sequence differ, obtaining the second quantization parameter corresponding to each video frame from a header of the bitstream corresponding to that video frame, the bitstream corresponding to each video frame being obtained by the encoding end by encoding the first information of the target three-dimensional mesh corresponding to that video frame.
  21. The method according to claim 16, wherein the decoding end dequantizing the first information according to the quantization information to obtain the target three-dimensional mesh comprises:
    the decoding end dequantizing the first part of the geometry information in the first information according to the second quantization information;
    the decoding end dequantizing the quantized first information according to the first quantization parameter to obtain the target three-dimensional mesh.
  22. The method according to claim 16, wherein the decoding end decoding the obtained bitstream to obtain the first information comprises:
    the decoding end obtaining target sub-bitstreams from the obtained bitstream, the target sub-bitstreams comprising: a patch information sub-bitstream, an occupancy map sub-bitstream, and a geometry map sub-bitstream;
    the decoding end obtaining second information from the target sub-bitstreams, the second information comprising at least one of an occupancy map and a geometry map;
    the decoding end obtaining the first information from the second information.
  23. The method according to claim 22, wherein in a case where the first information comprises the first-precision geometry information, obtaining the first information from the second information comprises:
    the decoding end obtaining two-dimensional image information from the first-precision occupancy map and the first-precision geometry map;
    the decoding end obtaining two-dimensional patches from the two-dimensional image information;
    the decoding end back-projecting the two-dimensional patches into three dimensions according to the patch information corresponding to the patch information sub-bitstream, to obtain three-dimensional patches;
    the decoding end obtaining the first-precision geometry information from the three-dimensional patches.
  24. The method according to claim 22, wherein in a case where the first information comprises the second-precision geometry information, obtaining the first information from the second information comprises:
    the decoding end obtaining the second-precision geometry information from the second-precision geometry map.
  25. The method according to claim 22, wherein in a case where the first information comprises the information on supplementary points, obtaining the first information from the second information comprises:
    the decoding end determining, from the geometry map of the supplementary points, a first raw patch corresponding to the third-precision geometry information of the supplementary points and a second raw patch corresponding to the fourth-precision geometry information of the supplementary points;
    the decoding end determining the information on supplementary points from the first raw patch and the second raw patch.
  26. The method according to claim 21, wherein the decoding end dequantizing the quantized first information according to the first quantization parameter to obtain the target three-dimensional mesh comprises:
    the decoding end determining the coordinates of each vertex in the first-precision geometry information according to the first-precision geometry information and the first quantization parameter of each component.
  27. The method according to claim 26, wherein the decoding end dequantizing the quantized first information according to the first quantization parameter to obtain the target three-dimensional mesh further comprises:
    the decoding end determining the target three-dimensional mesh according to the coordinates of each vertex in the target three-dimensional mesh and the second-precision geometry information.
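The dequantization of claims 26 and 27 is the mirror of the encoder-side split: scale the first-precision geometry back up by the per-component first quantization parameter, then add the second-precision residual. A minimal sketch, under the same illustrative step-size assumption as before (the claims do not fix the exact reconstruction formula):

```python
import numpy as np

def dequantize_geometry(high, low, qp1):
    # Claim 26 step: recover each vertex coordinate from the
    # first-precision geometry and the per-component first QP.
    coords = np.asarray(high, dtype=np.int64) * np.asarray(qp1, dtype=np.int64)
    # Claim 27 step: refine with the second-precision (lost) geometry.
    return coords + np.asarray(low, dtype=np.int64)
```

With lossless handling of the residual, this round-trips exactly: quantized coordinates (77, 35, 5) with step 16 and residual (2, 7, 9) reconstruct to (1234, 567, 89).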
  28. The method according to claim 26, wherein the decoding end dequantizing the quantized first information according to the first quantization parameter to obtain the target three-dimensional mesh further comprises:
    the decoding end determining the target three-dimensional mesh according to the information on supplementary points and the coordinates of each vertex in the first-precision geometry information.
  29. The method according to claim 16 or 28, wherein the information on supplementary points further comprises at least one of:
    indices of vertices in the first-precision geometry information corresponding to the supplementary points;
    third-precision geometry information of the supplementary points, the third-precision geometry information being the three-dimensional coordinate information of the supplementary points after quantization.
  30. An encoding apparatus, comprising:
    a first processing module, configured to quantize geometry information of a target three-dimensional mesh according to a first quantization parameter to obtain first information, the first information comprising at least one of: first-precision geometry information, second-precision geometry information, and information on supplementary points; the information on supplementary points comprising fourth-precision geometry information of the supplementary points, the fourth-precision geometry information being the three-dimensional coordinate information lost by the supplementary points during quantization;
    a second processing module, configured to quantize a first part of the geometry information in the first information according to a second quantization parameter, the first part of the geometry information comprising at least one of the second-precision geometry information and the fourth-precision geometry information of the supplementary points;
    a first encoding module, configured to encode the quantized first information and quantization information, the quantization information comprising first quantization information for indicating the first quantization parameter and second quantization information for indicating the second quantization parameter;
    wherein the first-precision geometry information is the geometry information of the target three-dimensional mesh after quantization, the second-precision geometry information is the geometry information lost during quantization of the target three-dimensional mesh, and the information on supplementary points is information on points generated during quantization that require additional processing.
  31. The apparatus according to claim 30, wherein the first encoding module is configured to encode the first quantization parameter;
    or is configured to obtain a first offset value of the first quantization parameter relative to a first base quantization parameter, the first base quantization parameter being a base quantization parameter set in a target group of pictures (GOP) corresponding to the geometry information of the target three-dimensional mesh, the target GOP being the GOP containing the video frame corresponding to the target three-dimensional mesh, and to encode the first offset value.
  32. An encoding device, comprising a processor and a memory, the memory storing a program or instructions executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the encoding method according to any one of claims 1 to 15.
  33. A decoding apparatus, comprising:
    a third processing module, configured to decode an obtained bitstream to obtain quantization information and first information, the first information comprising at least one of: first-precision geometry information, second-precision geometry information, and information on supplementary points, the information on supplementary points comprising fourth-precision geometry information of the supplementary points, the fourth-precision geometry information being the three-dimensional coordinate information lost by the supplementary points during quantization; the quantization information comprising first quantization information for indicating a first quantization parameter and second quantization information for indicating a second quantization parameter, the second quantization parameter being the quantization parameter used to quantize a first part of the geometry information in the first information, the first part of the geometry information comprising at least one of the second-precision geometry information and the fourth-precision geometry information of the supplementary points;
    a fourth processing module, configured to dequantize the first information according to the quantization information to obtain a target three-dimensional mesh;
    wherein the first-precision geometry information is the geometry information of the target three-dimensional mesh after quantization, the second-precision geometry information is the geometry information lost during quantization of the target three-dimensional mesh, and the information on supplementary points is information on points generated during quantization that require additional processing.
  34. The apparatus according to claim 33, wherein the third processing module comprises:
    a first obtaining sub-module, configured to obtain a first offset value of the first quantization parameter relative to a first base quantization parameter, the first base quantization parameter being a base quantization parameter set in a target group of pictures (GOP) corresponding to the geometry information of the target three-dimensional mesh, the target GOP being the GOP containing the video frame corresponding to the target three-dimensional mesh;
    a second obtaining sub-module, configured to obtain the first quantization parameter according to the first offset value and the first base quantization parameter.
  35. A decoding device, comprising a processor and a memory, the memory storing a program or instructions executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the decoding method according to any one of claims 16 to 29.
  36. A readable storage medium, storing a program or instructions, wherein the program or instructions, when executed by a processor, implement the steps of the encoding method according to any one of claims 1 to 15 or the steps of the decoding method according to any one of claims 16 to 29.
PCT/CN2023/083347 2022-03-25 2023-03-23 Encoding and decoding method, apparatus, and device WO2023179705A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210307829.0 2022-03-25
CN202210307829.0A CN116847083A (zh) 2022-03-25 2022-03-25 Encoding and decoding method, apparatus, and device

Publications (1)

Publication Number Publication Date
WO2023179705A1 true WO2023179705A1 (zh) 2023-09-28

Family

ID=88100063

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/083347 WO2023179705A1 (zh) 2022-03-25 2023-03-23 Encoding and decoding method, apparatus, and device

Country Status (2)

Country Link
CN (1) CN116847083A (zh)
WO (1) WO2023179705A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104243958A (zh) * 2014-09-29 2014-12-24 Lenovo (Beijing) Co., Ltd. Encoding and decoding method and apparatus for three-dimensional mesh data
GB2551387A (en) * 2016-06-17 2017-12-20 Canon Kk Improved encoding and decoding of geometry data in 3D mesh models
CN110785791A (zh) * 2017-06-22 2020-02-11 InterDigital VC Holdings, Inc. Method and device for encoding and reconstructing a point cloud
CN111512342A (zh) * 2017-12-22 2020-08-07 Samsung Electronics Co., Ltd. Method and apparatus for processing duplicate points in point cloud compression
US20210211724A1 (en) * 2020-01-08 2021-07-08 Apple Inc. Video-Based Point Cloud Compression with Variable Patch Scaling
WO2021194065A1 (ko) * 2020-03-23 2021-09-30 LG Electronics Inc. Point cloud data processing apparatus and method
US20220028119A1 (en) * 2018-12-13 2022-01-27 Samsung Electronics Co., Ltd. Method, device, and computer-readable recording medium for compressing 3d mesh content


Also Published As

Publication number Publication date
CN116847083A (zh) 2023-10-03


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23773951

Country of ref document: EP

Kind code of ref document: A1