WO2024017008A1 - Encoding and decoding method, apparatus, and device (编码、解码方法、装置及设备)


Info

Publication number
WO2024017008A1
Authority
WO
WIPO (PCT)
Prior art keywords
vertex
target
texture
coordinates
triangle
Prior art date
Application number
PCT/CN2023/104351
Other languages
English (en)
French (fr)
Inventor
邹文杰
张伟
杨付正
吕卓逸
Original Assignee
维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)
Publication of WO2024017008A1


Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 15/00 3D [three-dimensional] image rendering
            • G06T 15/04 Texture mapping
            • G06T 15/10 Geometric effects
          • G06T 17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
            • G06T 17/20 Finite element generation, e.g. wire-frame surface description, tessellation
          • G06T 9/00 Image coding
            • G06T 9/004 Predictors, e.g. intraframe, interframe coding
    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
            • H04N 19/10 ... using adaptive coding
              • H04N 19/134 ... characterised by the element, parameter or criterion affecting or controlling the adaptive coding
              • H04N 19/189 ... characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
                • H04N 19/196 ... being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
            • H04N 19/20 ... using video object coding
            • H04N 19/40 ... using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
            • H04N 19/44 Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder

Description

  • This application belongs to the field of coding and decoding technology, and specifically relates to an encoding and decoding method, apparatus, and device.
  • Texture coordinates, also known as UV coordinates, describe the texture of the vertices of a three-dimensional mesh.
  • To texture a three-dimensional mesh, the surface texture is first projected into two dimensions to form a two-dimensional texture map.
  • UV coordinates give the position of each three-dimensional vertex's texture within this two-dimensional texture map and correspond one-to-one with the geometric information. Texture coordinates therefore determine the texture map of the three-dimensional mesh and are an important part of it.
  • Embodiments of the present application provide an encoding and decoding method, apparatus, and device, which address a problem in the related art: loss of data accuracy enlarges the prediction residual, which reduces the coding efficiency of texture coordinates.
  • A first aspect provides an encoding method, including:
  • the encoding end reconstructs the geometric information and connectivity of the target three-dimensional mesh based on the encoding results of that geometric information and connectivity;
  • the encoding end performs a first shift operation on the coordinates of each vertex in the target three-dimensional mesh based on the reconstructed geometric information and connectivity to obtain the shifted coordinates of each vertex, where the bit length of the shifted coordinates is greater than the bit length of the original coordinates;
  • the encoding end encodes the texture coordinates of each vertex in the target three-dimensional mesh based on the shifted coordinates of each vertex.
  • A second aspect provides a decoding method, including:
  • the decoding end decodes the received code stream corresponding to the target three-dimensional mesh to obtain the geometric information and connectivity of the target three-dimensional mesh, and decodes the received code stream corresponding to each vertex to obtain the texture coordinate residual of each vertex;
  • the decoding end performs a first shift operation on the texture coordinate residual of each vertex to obtain the target residual of each vertex, where the bit length of the target residual is greater than the bit length of the texture coordinate residual;
  • the decoding end determines the real texture coordinates of each vertex based on the geometric information and connectivity of the target three-dimensional mesh and the target residual of each vertex.
  • A third aspect provides an encoding device, including:
  • a reconstruction module, configured to reconstruct the geometric information and connectivity of the target three-dimensional mesh based on the encoding results of that geometric information and connectivity;
  • a shift module, configured to perform a first shift operation on the coordinates of each vertex in the target three-dimensional mesh based on the reconstructed geometric information and connectivity to obtain the shifted coordinates of each vertex, where the bit length of the shifted coordinates is greater than the bit length of the original coordinates;
  • an encoding module, configured to encode the texture coordinates of each vertex in the target three-dimensional mesh based on the shifted coordinates of each vertex.
  • A fourth aspect provides a decoding device, including:
  • a decoding module, configured to decode the received code stream corresponding to the target three-dimensional mesh to obtain the geometric information and connectivity of the target three-dimensional mesh, and to decode the received code stream corresponding to each vertex to obtain the texture coordinate residual of each vertex;
  • a shift module, configured to perform a first shift operation on the texture coordinate residual of each vertex to obtain the target residual of each vertex, where the bit length of the target residual is greater than the bit length of the texture coordinate residual;
  • a determination module, configured to determine the real texture coordinates of each vertex based on the geometric information and connectivity of the target three-dimensional mesh and the target residual of each vertex.
  • A fifth aspect provides a terminal, including a processor and a memory.
  • The memory stores programs or instructions that can be run on the processor.
  • When the programs or instructions are executed by the processor, the steps of the method described in the first aspect, or the steps of the method described in the second aspect, are implemented.
  • A sixth aspect provides a readable storage medium on which programs or instructions are stored. When the programs or instructions are executed by a processor, the steps of the method described in the first aspect, or the steps of the method described in the second aspect, are implemented.
  • A seventh aspect provides a chip, including a processor and a communication interface.
  • The communication interface is coupled to the processor.
  • The processor is configured to run programs or instructions to implement the method described in the first aspect or the method described in the second aspect.
  • An eighth aspect provides a computer program/program product stored in a storage medium. The computer program/program product is executed by at least one processor to implement the steps of the method described in the first aspect or the steps of the method described in the second aspect.
  • A ninth aspect provides a system including an encoding end and a decoding end.
  • The encoding end performs the steps of the method described in the first aspect, and the decoding end performs the steps of the method described in the second aspect.
  • In the embodiments of the present application, the geometric information and connectivity of the target three-dimensional mesh are reconstructed from the encoding results of the target three-dimensional mesh.
  • A first shift operation is performed on the coordinates of each vertex in the reconstructed target three-dimensional mesh to obtain the shifted coordinates of each vertex, and the texture coordinates of each vertex in the target three-dimensional mesh are encoded based on those shifted coordinates.
  • Because the bit length of the shifted coordinates is greater than the bit length of the original coordinates, more bits are available to store the shifted coordinates of the vertices.
  • The subsequent UV coordinate prediction based on the shifted coordinates therefore stores coordinate data at high precision, preventing the prediction residual from growing due to loss of data accuracy and thereby improving the encoding efficiency of texture coordinates.
  • Figure 1 is a schematic flowchart of an encoding method provided by an embodiment of the present application.
  • Figure 2 is a schematic diagram of searching for target triangles provided by an embodiment of the present application.
  • Figure 3 is a geometric schematic diagram of the prediction principle provided by an embodiment of the present application.
  • Figure 4 is a schematic diagram of the UV coordinate encoding framework provided by an embodiment of the present application.
  • Figure 5 is a schematic flowchart of a decoding method provided by an embodiment of the present application.
  • Figure 6 is a schematic diagram of the UV coordinate decoding framework provided by an embodiment of the present application.
  • Figure 7 is a structural diagram of an encoding device provided by an embodiment of the present application.
  • Figure 8 is a structural diagram of a decoding device provided by an embodiment of the present application.
  • Figure 9 is a structural diagram of a communication device provided by an embodiment of the present application.
  • Figure 10 is a schematic diagram of the hardware structure of a terminal provided by an embodiment of the present application.
  • The terms "first", "second", and so on in the description and claims of this application are used to distinguish similar objects, not to describe a specific order or sequence. It is to be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the present application can be practiced in sequences other than those illustrated or described herein. Objects distinguished by "first" and "second" are usually of one type, and the number of such objects is not limited.
  • For example, the first object can be one object or multiple objects.
  • "And/or" in the description and claims indicates at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the related objects.
  • Figure 1 is a schematic flowchart of an encoding method provided by an embodiment of the present application.
  • The encoding method provided in this embodiment includes the following steps.
  • The encoding end reconstructs the geometric information and connectivity of the target three-dimensional mesh based on the encoding results of the geometric information and connectivity of the target three-dimensional mesh.
  • The target three-dimensional mesh mentioned in this application can be understood as the three-dimensional mesh corresponding to any video frame.
  • The geometric information of the target three-dimensional mesh can be understood as the coordinates of the vertices in the mesh, usually three-dimensional coordinates; the connectivity describes how elements such as vertices and patches in the mesh are connected, and can also be called the connectivity relationship.
  • In this application, the texture coordinates of the vertices are encoded based on geometric information and connectivity.
  • So that the encoder operates on the same data as the decoder, the geometric information and connectivity used are those reconstructed after encoding.
  • The encoding end performs a first shift operation on the coordinates of each vertex in the target three-dimensional mesh based on the reconstructed geometric information and connectivity to obtain the shifted coordinates of each vertex.
  • The first shift operation yields shifted coordinates whose bit length is greater than the bit length of the original coordinates; that is, low-precision (shorter bit length) coordinates are converted into high-precision (longer bit length) coordinates.
  • The encoding end then encodes the texture coordinates of each vertex in the target three-dimensional mesh based on the shifted coordinates of each vertex.
  • In summary, the geometric information and connectivity of the target three-dimensional mesh are reconstructed from the encoding results of the target three-dimensional mesh.
  • A first shift operation is performed on the coordinates of each vertex in the reconstructed mesh to obtain the shifted coordinates of each vertex, and the texture coordinates of each vertex in the target three-dimensional mesh are encoded based on those shifted coordinates.
  • Because the bit length of the shifted coordinates is greater than the bit length of the original coordinates, more bits are used to store the shifted coordinates of the vertices.
  • Coordinate data is thus stored at high precision, preventing the prediction residual from growing due to loss of data accuracy and improving the encoding efficiency of texture coordinates.
  • Optionally, performing a first shift operation on the coordinates of each vertex in the target three-dimensional mesh to obtain the shifted coordinates of each vertex includes:
  • for any vertex in the target three-dimensional mesh, the encoding end increases the number of bits occupied by the coordinates of the vertex to obtain the first target bits corresponding to the coordinates of the vertex;
  • the encoding end uses the first target bits to store the coordinates of the vertex, obtaining the shifted coordinates of the vertex.
  • That is, the first shift operation increases the number of bits occupied by the coordinates of a vertex.
  • Optionally, increasing the number of bits occupied by the coordinates of the vertex includes:
  • the encoding end uses a first shift parameter to perform a binary left shift on the coordinates of the vertex.
  • Another optional implementation is to add a first preset number of bits to the bits occupied by the coordinates of the vertex to obtain the first target bits.
  • The bits after the increase are called the first target bits.
  • By using the first target bits to store the coordinates of the vertex, more coordinate data can be stored, yielding the shifted coordinates of the vertex.
  • A specific implementation of using the first shift parameter to perform a binary left shift on the coordinates of the vertex is as follows.
  • A Cartesian coordinate system is established in the target three-dimensional mesh, and the coordinates of each vertex are expressed in this coordinate system.
  • The first shift operation is C'_uvx = C_uvx << leftshift, where C_uvx is the coordinate, leftshift is the shift parameter, and C'_uvx is the shifted coordinate.
  • If the bit length of C_uvx is M and leftshift represents a left shift of K1 bits, then the bit length of C'_uvx is M + K1, where K1 is a positive integer. A minimal sketch follows.
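The following is a minimal sketch of the first shift operation under the fixed-point framing above; the function name and the example values (M = 12, K1 = 4) are illustrative assumptions, not part of the application.

```python
def first_shift(coord: int, leftshift: int) -> int:
    # C'_uvx = C_uvx << leftshift: promoting an M-bit coordinate by K1 bits
    # yields an (M + K1)-bit value, so later prediction arithmetic loses
    # less accuracy to rounding.
    return coord << leftshift

# Example: a 12-bit coordinate (M = 12) promoted by K1 = 4 bits to 16 bits.
c = 0x0ABC
assert first_shift(c, 4) == c * 16
```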
  • Optionally, encoding the texture coordinates of each vertex in the target three-dimensional mesh based on the shifted coordinates of each vertex includes:
  • the encoding end determines N predicted texture coordinates of each vertex in the target three-dimensional mesh based on the shifted coordinates of each vertex, where N is a positive integer greater than 1;
  • the encoding end encodes the texture coordinate residual of each vertex, where the texture coordinate residual of a vertex is determined based on the N predicted texture coordinates of that vertex.
  • In this implementation, the encoding end may use multiple encoded triangles to predict each vertex when encoding the texture coordinates of the vertices in the target three-dimensional mesh.
  • Multiple encoded triangles are used to predict each vertex, yielding N predicted texture coordinates per vertex.
  • The texture coordinate residual of a vertex can then be determined from its N predicted texture coordinates, and that residual is encoded.
  • For a specific implementation of encoding the texture coordinate residual of each vertex, see the subsequent embodiments.
  • Optionally, determining the N predicted texture coordinates of each vertex in the target three-dimensional mesh based on the shifted coordinates of each vertex includes:
  • the encoding end selects a first edge from the edge set, and determines as target triangles both the triangle corresponding to the first edge and any triangle that has the vertex to be encoded as its opposite vertex and does not include the first edge;
  • in each target triangle, the vertices other than the vertex to be encoded are encoded vertices, and the opposite vertex of the first edge in the triangle corresponding to the first edge is the vertex to be encoded;
  • for each target triangle, the encoding end obtains the predicted texture coordinates of the vertex to be encoded in that target triangle.
  • Before this, the initial edge set needs to be obtained. Specifically, the initial edge set is obtained as follows:
  • the method further includes:
  • the encoding end selects an initial triangle based on the reconstructed geometric information and connectivity;
  • the encoding end encodes the texture coordinates of the three vertices of the initial triangle and stores the three edges of the initial triangle in the edge set.
  • For the initial triangle, the vertices are not predicted from other triangles; their texture coordinates are encoded directly.
  • For example, the texture coordinates of the first vertex of the initial triangle can be encoded directly; the texture coordinates of the second vertex are obtained by edge-based prediction from the first vertex; and the texture coordinates of the third vertex are obtained using the similar-triangle prediction method.
  • Each edge of the initial triangle is stored in the edge set to form the initial edge set, and subsequent vertices are then predicted based on this initial edge set.
  • Figure 2 includes three triangles: a first triangle formed by vertex C, vertex N, and vertex P.O; a second triangle formed by vertex C, vertex N, and vertex P; and a third triangle formed by vertex C, vertex P, and vertex N.O. Vertex N, vertex P, vertex P.O, and vertex N.O are all encoded vertices.
  • Vertex C is the vertex to be encoded, and the endpoints of the first edge are vertex N and vertex P, so the triangle corresponding to the first edge, that is, the second triangle, is determined as a target triangle. Further, rotating around vertex C, triangles whose two vertices other than vertex C are encoded vertices and that do not include the first edge are searched for; these are also determined as target triangles, so the first triangle and the third triangle are likewise target triangles.
  • In each target triangle, the vertices other than the vertex to be encoded are encoded vertices, and the number of target triangles is greater than 1.
  • The multiple target triangles may be adjacent to one another, or they may not be adjacent, as illustrated in the sketch below.
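As a hedged illustration of the search just described, the sketch below gathers every triangle incident to the vertex to be encoded whose other two vertices are already encoded; this single test covers both the triangle that owns the first edge and the triangles found by rotating around vertex C. The data layout (vertex-label triples, a set of encoded vertices) is an assumption.

```python
def find_target_triangles(c, triangles, encoded):
    """Return the target triangles for predicting vertex c.

    A triangle qualifies when it contains c and its two other vertices
    are already encoded (the triangle across the first edge satisfies
    this automatically, since the first edge joins two encoded vertices).
    """
    return [tri for tri in triangles
            if c in tri and all(v in encoded for v in tri if v != c)]

# Toy version of Figure 2: all three triangles around C qualify.
tris = [("C", "N", "P.O"), ("C", "N", "P"), ("C", "P", "N.O")]
done = {"N", "P", "P.O", "N.O"}
assert find_target_triangles("C", tris, done) == tris
```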
  • Optionally, encoding the texture coordinate residual of each vertex includes:
  • the encoding end determines the target value corresponding to the N predicted texture coordinates of the vertex as the first target texture coordinate of the vertex;
  • the encoding end performs a second shift operation on the first target texture coordinate of the vertex to obtain the second target texture coordinate;
  • the encoding end encodes the texture coordinate residual of the vertex, where the residual is determined based on the real texture coordinate of the vertex and the second target texture coordinate of the vertex.
  • In this implementation, the N predicted texture coordinates can be weighted and summed, and the resulting target value is determined as the first target texture coordinate of the vertex. When each predicted texture coordinate has the same weight, the average of the N predicted texture coordinates is the first target texture coordinate of the vertex.
  • It should be noted that the target value is not limited to the weighted sum of the N predicted texture coordinates.
  • The target value corresponding to the N predicted texture coordinates can also be obtained through other calculation methods, which is not specifically limited here.
  • Continuing the example above, the vertex to be encoded corresponds to three predicted texture coordinates, namely Pred_C_NP, Pred_C_PON, and Pred_C_PNO.
  • Pred_C_NP is the predicted texture coordinate of the vertex to be encoded in the second triangle;
  • Pred_C_PON is the predicted texture coordinate of the vertex to be encoded in the first triangle;
  • Pred_C_PNO is the predicted texture coordinate of the vertex to be encoded in the third triangle.
  • The average of Pred_C_NP, Pred_C_PON, and Pred_C_PNO is determined as the first target texture coordinate of the vertex to be encoded, as sketched below.
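A small sketch of the equal-weight case follows; integer texture coordinates and round-to-nearest averaging are assumptions for illustration.

```python
def first_target_uv(preds):
    """Average N integer UV predictions (e.g. Pred_C_NP, Pred_C_PON,
    Pred_C_PNO) with round-to-nearest; equal weights reduce the weighted
    sum described above to a plain mean."""
    n = len(preds)
    return ((sum(p[0] for p in preds) + n // 2) // n,
            (sum(p[1] for p in preds) + n // 2) // n)

assert first_target_uv([(10, 20), (12, 22), (11, 24)]) == (11, 22)
```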
  • Optionally, performing a second shift operation on the first target texture coordinate of the vertex to obtain the second target texture coordinate includes:
  • the encoding end reduces the number of bits occupied by the first target texture coordinate of the vertex to obtain the second target bits corresponding to the first target texture coordinate of the vertex;
  • the encoding end uses the second target bits to store the first target texture coordinate of the vertex, obtaining the second target texture coordinate of the vertex, where the bit length of the second target texture coordinate is smaller than the bit length of the first target texture coordinate.
  • That is, the second shift operation reduces the number of bits occupied by the first target texture coordinate of the vertex.
  • Optionally, reducing the number of bits occupied by the first target texture coordinate of the vertex includes:
  • the encoding end uses a second shift parameter to perform a binary right shift on the first target texture coordinate of the vertex.
  • Another optional implementation is to subtract a second preset number of bits from the bits occupied by the first target texture coordinate of the vertex to obtain the second target bits.
  • The reduced bits are called the second target bits, and the second target bits are used to store the first target texture coordinate of the vertex, yielding the second target texture coordinate of the vertex.
  • Specifically, the second shift operation is A' = (Abs(A) + offset) >> rightshift, where rightshift is the shift parameter, A is the first target texture coordinate, A' is the second target texture coordinate, and Abs is the absolute-value operation.
  • The offset parameter is related to the number of bits of translation represented by the shift parameter.
  • If the bit length of the first target texture coordinate is M + K2 and rightshift represents a right shift of K2 bits, then the bit length of the second target texture coordinate is M, where K2 is a positive integer. A sketch follows.
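Below is a hedged sketch of the second shift operation A' = (Abs(A) + offset) >> rightshift. The text only says the offset is tied to the shift amount; choosing offset = 1 << (rightshift - 1) (round-to-nearest) and restoring the sign afterwards are assumptions for illustration.

```python
def second_shift(a: int, rightshift: int) -> int:
    # offset = half of the discarded range, i.e. round-to-nearest (assumed).
    offset = 1 << (rightshift - 1) if rightshift > 0 else 0
    magnitude = (abs(a) + offset) >> rightshift
    return -magnitude if a < 0 else magnitude

# Example: an (M + K2)-bit value reduced back to M bits with K2 = 4.
assert second_shift((0x0ABC << 4) + 7, 4) == 0x0ABC
```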
  • Optionally, encoding the texture coordinate residual of each vertex includes:
  • the encoding end determines the target value corresponding to the N predicted texture coordinates of the vertex as the first target texture coordinate of the vertex;
  • the encoding end determines the texture coordinate residual of the vertex based on the real texture coordinate of the vertex and the first target texture coordinate of the vertex;
  • the encoding end encodes the texture coordinate residual after performing the second shift operation on it.
  • In this implementation, the first target texture coordinate of the vertex is determined first.
  • The specific implementation is consistent with the above embodiment and is not repeated here.
  • The texture coordinate residual of the vertex to be encoded is then obtained from the first target texture coordinate and the real texture coordinate.
  • For example, the real texture coordinate and the first target texture coordinate can be subtracted to obtain the texture coordinate residual of the vertex to be encoded.
  • Further, a second shift operation is performed on the texture coordinate residual, and the shifted texture coordinate residual is encoded.
  • The implementation of the second shift operation on the texture coordinate residual is consistent with the second shift operation described above and is not repeated here.
  • Optionally, encoding the texture coordinates of each vertex in the target three-dimensional mesh based on the shifted coordinates of each vertex includes:
  • the encoding end encodes the texture coordinates of each vertex in the target three-dimensional mesh using the method of predicting vertices with similar triangles, based on the shifted coordinates of each vertex.
  • Predicting vertices with similar triangles ensures the accuracy of the predicted vertices and thus the accuracy of the texture coordinate encoding.
  • Optionally, encoding the texture coordinates of each vertex in the target three-dimensional mesh based on the shifted coordinates of each vertex includes:
  • the encoding end selects a first edge from the edge set and obtains the predicted texture coordinates of the vertex to be encoded in the target triangle;
  • the target triangle is the triangle corresponding to the first edge, and the opposite vertex of the target triangle is the vertex to be encoded;
  • the encoding end encodes the texture coordinate residual between the real texture coordinates of the vertex to be encoded and the predicted texture coordinates of the vertex to be encoded.
  • Before this, the initial edge set needs to be obtained. Specifically, the initial edge set is obtained as follows:
  • the encoding end selects an initial triangle based on the reconstructed geometric information and connectivity;
  • the encoding end encodes the texture coordinates of the three vertices of the initial triangle and stores the three edges of the initial triangle in the edge set.
  • For the initial triangle, the vertices are not predicted; their texture coordinates are encoded directly. After the texture coordinates of each vertex of the initial triangle are encoded, each edge of the initial triangle is stored in the edge set to form the initial edge set, and subsequent vertices are then predicted based on this initial edge set.
  • The residual of the vertex to be encoded can be obtained from the predicted texture coordinates and the real texture coordinates, and the vertex is encoded by encoding this residual.
  • The residual is the difference between the real texture coordinates of the vertex to be encoded and the predicted texture coordinates of the corresponding vertex of the target triangle: it can be obtained by subtracting the predicted texture coordinates from the real texture coordinates, or by subtracting the real texture coordinates from the predicted texture coordinates.
  • Optionally, encoding the texture coordinate residual includes:
  • the encoding end performs a second shift operation on the predicted texture coordinates of the vertex to be encoded to obtain the third target texture coordinate of the vertex to be encoded;
  • the encoding end encodes the texture coordinate residual between the real texture coordinates of the vertex to be encoded and the third target texture coordinate of the vertex to be encoded.
  • In this implementation, a second shift operation may be performed on the predicted texture coordinates of the vertex to be encoded to obtain its third target texture coordinate. The specific implementation of this second shift operation is consistent with the second shift operation in the above embodiment and is not repeated here.
  • The texture coordinate residual between the real texture coordinates of the vertex to be encoded and its third target texture coordinate is then encoded.
  • For example, the difference between the real texture coordinates and the third target texture coordinate may be determined as the texture coordinate residual.
  • Optionally, encoding the texture coordinate residual between the real texture coordinates of the vertex to be encoded and the predicted texture coordinates of the vertex to be encoded includes:
  • the encoding end determines the texture coordinate residual of the vertex to be encoded based on the real texture coordinates of the vertex to be encoded and the predicted texture coordinates of the vertex to be encoded;
  • the encoding end encodes the texture coordinate residual after performing the second shift operation on it.
  • In this implementation, the texture coordinate residual of the vertex to be encoded may be determined from its real and predicted texture coordinates; a second shift operation is then performed on the residual, and the shifted residual is encoded.
  • Optionally, obtaining the predicted texture coordinates of the vertex to be encoded in the target triangle includes:
  • the encoding end obtains the texture coordinates of the projection point of the vertex to be encoded on the first edge according to the geometric coordinates of each vertex of the target triangle;
  • the encoding end obtains the predicted texture coordinates of the vertex to be encoded based on the texture coordinates of the projection point.
  • In this implementation, the texture coordinates of the projection point of the vertex to be encoded on the first edge can be obtained from the geometric coordinates of the three vertices of the target triangle; see the subsequent embodiments for the specific implementation.
  • The predicted texture coordinates of the vertex to be encoded are then obtained from the texture coordinates of the projection point; see the subsequent embodiments for the specific implementation.
  • Optionally, obtaining the texture coordinates of the projection point of the vertex to be encoded on the first edge according to the geometric coordinates of each vertex of the target triangle includes:
  • the encoding end obtains the texture coordinates of the projection point of the vertex to be encoded on the first edge from the sum of N_uv and the vector NX_uv, or from the difference of N_uv and the vector XN_uv;
  • N_uv is the texture coordinate of the vertex N on the first edge of the target triangle;
  • NX_uv is the vector from the texture coordinate of vertex N on the first edge to the texture coordinate of the predicted projection point X of the vertex to be encoded on the first edge, and XN_uv is the vector from the texture coordinate of the predicted projection point X to the texture coordinate of vertex N.
  • Specifically, the encoding end can obtain the texture coordinates of the projection point of the vertex to be encoded on the first edge according to the first formula.
  • The first formula is X_uv = N_uv + NX_uv, or X_uv = N_uv - XN_uv, where:
  • X_uv is the texture coordinate of the projection point of the vertex to be encoded on the first edge;
  • N_uv is the texture coordinate of the vertex N on the first edge of the target triangle.
  • As shown in Figure 3, edge NP is an edge selected from the edge set and can be regarded as the first edge mentioned above.
  • Vertex N and vertex P are the two endpoints of the first edge.
  • Vertex C is the vertex to be encoded.
  • Vertex N, vertex P, and vertex C form the target triangle described above.
  • Point X is the projection of vertex C onto edge NP.
  • Vertex O is an encoded point, and the triangle formed by vertices O, N, and P shares edge NP with the triangle formed by vertices N, P, and C.
  • The texture coordinates of the projection point of the vertex to be encoded on the first edge can thus be obtained from the first formula above, as sketched below.
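The excerpt does not spell out how the vector NX_uv is derived; the sketch below uses the usual similar-triangle construction (an assumption consistent with Figure 3): project C onto edge NP in 3D and carry the ratio t = |NX| / |NP| over to the UV edge.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def projection_uv(n_xyz, p_xyz, c_xyz, n_uv, p_uv):
    """First formula, X_uv = N_uv + NX_uv, with NX_uv = t * NP_uv (assumed)."""
    np3 = [pi - ni for ni, pi in zip(n_xyz, p_xyz)]   # 3D edge N -> P
    nc3 = [ci - ni for ni, ci in zip(n_xyz, c_xyz)]   # 3D vector N -> C
    t = dot(nc3, np3) / dot(np3, np3)                 # signed |NX| / |NP|
    return tuple(nu + t * (pu - nu) for nu, pu in zip(n_uv, p_uv))

# C projects onto the midpoint of NP, so X_uv is the midpoint of the UV edge.
assert projection_uv((0, 0, 0), (2, 0, 0), (1, 1, 0),
                     (0.0, 0.0), (1.0, 0.0)) == (0.5, 0.0)
```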
  • Optionally, obtaining the predicted texture coordinates of the vertex to be encoded according to the texture coordinates of the projection point includes:
  • the encoding end obtains the predicted texture coordinates of the vertex to be encoded from X_uv and the vector XC_uv; the first triangle and the target triangle share the first edge, and the opposite vertex of the first edge in the first triangle is the first vertex O;
  • X_uv is the texture coordinate of the predicted projection point X of the vertex to be encoded on the first edge, and XC_uv is the vector from the texture coordinate of the predicted projection point X to the texture coordinate of the vertex C to be encoded.
  • Specifically, the texture coordinates of the vertex to be encoded can be obtained according to the second formula Pred_C_NP = X_uv + XC_uv, where:
  • Pred_C_NP is the predicted texture coordinate of the vertex to be encoded;
  • X_uv is the texture coordinate of the predicted projection point X of the vertex to be encoded on the first edge;
  • O_uv is the texture coordinate of the first vertex O (the opposite vertex of the first edge in the first triangle), used to fix on which side of the first edge the prediction lies.
  • That is, the texture coordinate of the first vertex O can be used, based on the second formula above, to obtain the predicted texture coordinates of the vertex to be encoded.
  • The first triangle is the triangle formed by vertex N, vertex P, and vertex O in Figure 3.
  • It should be noted that if vertex O lies on the first edge formed by vertex N and vertex P, the area of the first triangle is 0, and the first triangle is determined to be a degenerate triangle. A sketch of this prediction, including the side selection via vertex O, follows.
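A sketch of the second formula Pred_C_NP = X_uv + XC_uv follows. The magnitude of the perpendicular offset (xc_len, the UV-space length of XC) is taken as an input, and the sign is chosen so that the prediction lands on the opposite side of edge NP from the coded vertex O; this side-selection rule is an assumption consistent with the text's use of O_uv.

```python
import math

def predict_uv(x_uv, n_uv, p_uv, o_uv, xc_len):
    ex, ey = p_uv[0] - n_uv[0], p_uv[1] - n_uv[1]
    norm = math.hypot(ex, ey)
    perp = (-ey / norm, ex / norm)          # unit perpendicular to UV edge NP
    cand_a = (x_uv[0] + xc_len * perp[0], x_uv[1] + xc_len * perp[1])
    cand_b = (x_uv[0] - xc_len * perp[0], x_uv[1] - xc_len * perp[1])
    # Keep the candidate farther from O_uv, i.e. across the edge from O.
    if math.dist(cand_a, o_uv) >= math.dist(cand_b, o_uv):
        return cand_a
    return cand_b

# O sits below edge NP, so the prediction is placed above it.
assert predict_uv((0.5, 0.0), (0.0, 0.0), (1.0, 0.0),
                  (0.5, -1.0), 0.5) == (0.5, 0.5)
```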
  • Optionally, obtaining the predicted texture coordinates of the vertex to be encoded according to the texture coordinates of the projection point includes:
  • the encoding end obtains the texture coordinates of the vertex to be encoded from X_uv and the vector XC_uv, and encodes the target identifier corresponding to the vertex to be encoded; the first triangle and the target triangle share the first edge, and the opposite vertex of the first edge in the first triangle is the first vertex O;
  • X_uv is the texture coordinate of the predicted projection point X of the vertex to be encoded on the first edge, and XC_uv is the vector from the texture coordinate of the predicted projection point X to the texture coordinate C_uv of the vertex to be encoded.
  • Specifically, the texture coordinates of the vertex to be encoded can be obtained according to the third formula, and the target identifier corresponding to the vertex to be encoded is encoded. In the third formula:
  • Pred_C_NP is the predicted texture coordinate of the vertex to be encoded;
  • X_uv is the texture coordinate of the predicted projection point X of the vertex to be encoded on the first edge; XC_uv is the vector from the predicted projection point X to the texture coordinate C_uv of the vertex to be encoded;
  • C_uv is the texture coordinate of the vertex to be encoded;
  • the target identifier is used to characterize the size relationship between the two candidate predicted texture coordinates obtained from the third formula.
  • The predicted texture coordinates of the vertex to be encoded can thus be obtained based on the third formula above, with the target identifier recording which candidate was chosen.
  • It should be noted that the parts of the first, second, and third formulas involving vertex N can be replaced by vertex P.
  • For example, in the first formula, NX_uv can be replaced with PX_uv, where PX_uv is the vector from the texture coordinate of vertex P on the first edge of the target triangle to the texture coordinate of the predicted projection point X of the vertex to be encoded on the first edge.
  • The UV coordinate encoding framework of the embodiment of this application is shown in Figure 4.
  • The overall encoding process is as follows.
  • The reconstructed geometric information and connectivity are used to encode the UV coordinates: first, a triangle is selected as the initial triangle and its coordinate values are encoded directly; then, triangles adjacent to the initial triangle are taken as triangles to be encoded, and the UV coordinates of unencoded vertices are predicted using similar-triangle prediction or prediction from multiple encoded triangles, as outlined in the sketch below.
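The outline below is a hedged, self-contained toy of this flow: the initial triangle is coded directly, an edge list drives the traversal, and each new vertex is coded as a residual against a prediction. The prediction here is stubbed as the mean of the edge endpoints' UVs purely to keep the sketch short; the application's actual predictors are the similar-triangle and multi-triangle schemes above, and the output list stands in for an entropy coder.

```python
from collections import deque

def encode_uv(triangles, uv):
    """triangles: vertex-index triples; uv: dict vertex -> (u, v)."""
    out, coded, edges = [], set(), deque()
    a, b, c0 = triangles[0]                      # initial triangle
    for v in (a, b, c0):
        out.append(("raw", uv[v]))               # coded directly
        coded.add(v)
    edges.extend([(a, b), (b, c0), (c0, a)])
    while edges:
        n, p = edges.popleft()                   # the "first edge"
        for tri in triangles:
            rest = set(tri) - {n, p}
            if {n, p} <= set(tri) and rest and (c := rest.pop()) not in coded:
                pred = ((uv[n][0] + uv[p][0]) / 2, (uv[n][1] + uv[p][1]) / 2)
                out.append(("res", (uv[c][0] - pred[0], uv[c][1] - pred[1])))
                coded.add(c)
                edges.extend([(n, c), (c, p)])
    return out

tris = [(0, 1, 2), (1, 2, 3)]
uvs = {0: (0, 0), 1: (4, 0), 2: (0, 4), 3: (4, 4)}
stream = encode_uv(tris, uvs)                    # 3 raw entries + 1 residual
```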
  • Figure 5 is a schematic flowchart of a decoding method provided by an embodiment of the present application.
  • The decoding method provided in this embodiment includes the following steps.
  • The decoding end decodes the received code stream corresponding to the target three-dimensional mesh to obtain the geometric information and connectivity of the target three-dimensional mesh, and decodes the received code stream corresponding to each vertex to obtain the texture coordinate residual of each vertex.
  • The decoding end performs a first shift operation on the texture coordinate residual of each vertex to obtain the target residual of each vertex.
  • The bit length of the target residual is greater than the bit length of the texture coordinate residual.
  • The decoding end determines the real texture coordinates of each vertex based on the geometric information and connectivity of the target three-dimensional mesh and the target residual of each vertex.
  • In summary, the received code stream corresponding to the target three-dimensional mesh is decoded to obtain the geometric information and connectivity of the target three-dimensional mesh,
  • and the received code stream corresponding to each vertex is decoded to obtain the texture coordinate residual of each vertex.
  • A first shift operation is performed on the texture coordinate residual of each vertex to obtain the target residual of each vertex, where the bit length of the target residual is greater than the bit length of the texture coordinate residual, so more bits are used to store the target residual.
  • Coordinate data is thus stored at high precision, preventing the prediction residual from growing due to loss of data accuracy and improving the decoding efficiency of texture coordinates.
  • Optionally, performing a first shift operation on the texture coordinate residual of each vertex to obtain the target residual of each vertex includes:
  • the decoding end increases the number of bits occupied by the texture coordinate residual of the vertex to obtain the third target bits corresponding to the texture coordinate residual of the vertex;
  • the decoding end uses the third target bits to store the texture coordinate residual of the vertex, obtaining the target residual of the vertex.
  • That is, the first shift operation increases the number of bits occupied by the texture coordinate residual of the vertex.
  • Optionally, increasing the number of bits occupied by the texture coordinate residual of the vertex includes:
  • the decoding end uses a first shift parameter to perform a binary left shift on the texture coordinate residual of the vertex.
  • Another optional implementation is to add a third preset number of bits to the bits occupied by the texture coordinate residual of the vertex to obtain the third target bits.
  • The bits after the increase are called the third target bits, and the third target bits are used to store the texture coordinate residual of the vertex, so that more residual data can be stored, yielding the target residual of the vertex; a minimal sketch follows.
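A one-line sketch of the decoder-side first shift; as on the encoder side, the left shift is an exact precision promotion (names are illustrative).

```python
def residual_to_target(residual: int, leftshift: int) -> int:
    # The decoded texture coordinate residual is promoted by K1 bits,
    # yielding the target residual used in the high-precision domain.
    return residual << leftshift
```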
  • Optionally, determining the real texture coordinates of each vertex based on the geometric information and connectivity of the target three-dimensional mesh and the target residual of each vertex includes:
  • the decoding end determines N predicted texture coordinates of each vertex in the target three-dimensional mesh, where N is a positive integer greater than 1;
  • the decoding end performs a first shift operation on the N predicted texture coordinates of each vertex to obtain N fourth target texture coordinates of each vertex, where the bit length of the fourth target texture coordinates is greater than the bit length of the predicted texture coordinates;
  • the decoding end determines the real texture coordinate of each vertex based on the N fourth target texture coordinates of each vertex and the target residual of each vertex.
  • In this implementation, the decoding end may use multiple decoded triangles to predict each vertex and determine the N predicted texture coordinates of each vertex in the target three-dimensional mesh.
  • Optionally, determining the N predicted texture coordinates of each vertex in the target three-dimensional mesh includes:
  • the decoding end selects a first edge from the edge set, and determines as target triangles both the triangle corresponding to the first edge and any triangle that has the vertex to be decoded as its opposite vertex and does not include the first edge;
  • in each target triangle, the vertices other than the vertex to be decoded are decoded vertices, and the opposite vertex of the first edge in the triangle corresponding to the first edge is the vertex to be decoded;
  • for each target triangle, the decoding end obtains the predicted texture coordinates of the vertex to be decoded in that target triangle.
  • Optionally, determining the real texture coordinates of each vertex based on the N fourth target texture coordinates of each vertex and the target residual of each vertex includes:
  • for any vertex, the decoding end performs a second shift operation on the N fourth target texture coordinates of the vertex to obtain the N predicted texture coordinates of the vertex, and performs a second shift operation on the target residual of the vertex to obtain the texture coordinate residual of the vertex;
  • the decoding end determines the target value corresponding to the N predicted texture coordinates of the vertex as the fifth target texture coordinate of the vertex;
  • the decoding end adds the fifth target texture coordinate of the vertex and the texture coordinate residual of the vertex to determine the real texture coordinate of the vertex.
  • In this implementation, for any vertex, a second shift operation is performed on the N fourth target texture coordinates of the vertex to obtain its N predicted texture coordinates, and a second shift operation is performed on the target residual of the vertex to obtain its texture coordinate residual, where the bit length of the predicted texture coordinates is less than the bit length of the fourth target texture coordinates, and the bit length of the texture coordinate residual is less than the bit length of the target residual.
  • The target value corresponding to the N predicted texture coordinates is determined as the fifth target texture coordinate, and the fifth target texture coordinate and the texture coordinate residual are added to determine the real texture coordinate, as sketched below.
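The sketch below shows this reconstruction order for one scalar component: shift the N fourth target texture coordinates back down, average them into the fifth target texture coordinate, and add the down-shifted residual. The rounding right shift repeats the encoder-side assumption (offset = 1 << (K2 - 1)).

```python
def second_shift(a: int, k: int) -> int:
    # Rounding right shift, mirroring the encoder-side sketch (assumed form).
    off = 1 << (k - 1) if k > 0 else 0
    m = (abs(a) + off) >> k
    return -m if a < 0 else m

def reconstruct_component(fourth_targets, target_residual, k2):
    preds = [second_shift(t, k2) for t in fourth_targets]   # predicted UVs
    fifth = (sum(preds) + len(preds) // 2) // len(preds)    # target value
    return fifth + second_shift(target_residual, k2)        # real coordinate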
  • Optionally, determining the real texture coordinates of each vertex based on the N fourth target texture coordinates of each vertex and the target residual of each vertex includes:
  • the decoding end determines the target value corresponding to the N fourth target texture coordinates of the vertex as the sixth target texture coordinate of the vertex;
  • the decoding end adds the sixth target texture coordinate of the vertex and the target residual of the vertex to determine the seventh target texture coordinate of the vertex;
  • the decoding end performs a second shift operation on the seventh target texture coordinate of the vertex to determine the real texture coordinate of the vertex.
  • In this implementation, the target value corresponding to the N fourth target texture coordinates of the vertex is determined as the sixth target texture coordinate of the vertex.
  • The sixth target texture coordinate of the vertex and the target residual are added to determine the seventh target texture coordinate.
  • Further, a second shift operation is performed on the seventh target texture coordinate to determine the real texture coordinate of the vertex.
  • The bit length of the real texture coordinate is smaller than the bit length of the seventh target texture coordinate.
  • Optionally, determining the real texture coordinates of each vertex based on the geometric information and connectivity of the target three-dimensional mesh and the target residual of each vertex includes:
  • the decoding end decodes to obtain the real texture coordinates of each vertex in the target three-dimensional mesh.
  • Optionally, decoding to obtain the real texture coordinates of each vertex in the target three-dimensional mesh includes:
  • the decoding end selects a first edge from the edge set and obtains the predicted texture coordinates of the vertex to be decoded in the target triangle;
  • the target triangle is the triangle corresponding to the first edge, and the opposite vertex of the target triangle is the vertex to be decoded;
  • the decoding end performs a first shift operation on the predicted texture coordinates of the vertex to be decoded to obtain the eighth target texture coordinate of the vertex to be decoded, where the bit length of the eighth target texture coordinate is greater than the bit length of the predicted texture coordinates;
  • the decoding end determines the real texture coordinates of the vertex to be decoded based on the eighth target texture coordinate of the vertex to be decoded and the target residual of the vertex to be decoded.
  • Optionally, determining the real texture coordinates of the vertex to be decoded based on the eighth target texture coordinate of the vertex to be decoded and the target residual of the vertex to be decoded includes:
  • the decoding end performs a second shift operation on the eighth target texture coordinate of the vertex to be decoded to obtain the predicted texture coordinates of the vertex, and performs a second shift operation on the target residual of the vertex to be decoded to obtain the texture coordinate residual of the vertex;
  • the decoding end adds the predicted texture coordinates of the vertex to be decoded and the texture coordinate residual of the vertex to be decoded to determine the real texture coordinates of the vertex to be decoded.
  • In this implementation, the predicted texture coordinates are obtained by performing a second shift operation on the eighth target texture coordinate of the vertex to be decoded, and the texture coordinate residual is obtained by performing a second shift operation on the target residual of the vertex to be decoded. Further, the predicted texture coordinates and the texture coordinate residual are added to determine the real texture coordinates of the vertex to be decoded.
  • Optionally, determining the real texture coordinates of the vertex to be decoded based on the eighth target texture coordinate of the vertex to be decoded and the target residual of the vertex to be decoded includes:
  • the decoding end adds the eighth target texture coordinate of the vertex to be decoded and the target residual of the vertex to be decoded to determine the ninth target texture coordinate of the vertex to be decoded;
  • the decoding end performs a second shift operation on the ninth target texture coordinate of the vertex to be decoded to determine the real texture coordinates of the vertex to be decoded.
  • In this implementation, the eighth target texture coordinate of the vertex to be decoded can be added to the target residual to determine the ninth target texture coordinate of the vertex to be decoded. Further, a second shift operation is performed on the ninth target texture coordinate to determine the real texture coordinates of the vertex to be decoded.
  • Optionally, obtaining the predicted texture coordinates of the vertex to be decoded in the target triangle includes:
  • the decoding end obtains the texture coordinates of the projection point of the vertex to be decoded on the first edge according to the geometric coordinates of each vertex of the target triangle;
  • the decoding end obtains the predicted texture coordinates of the vertex to be decoded based on the texture coordinates of the projection point.
  • Optionally, obtaining the texture coordinates of the projection point of the vertex to be decoded on the first edge according to the geometric coordinates of each vertex of the target triangle includes:
  • the decoding end obtains the texture coordinates of the projection point of the vertex to be decoded on the first edge from the sum of N_uv and the vector NX_uv, or from the difference of N_uv and the vector XN_uv;
  • N_uv is the texture coordinate of the vertex N on the first edge of the target triangle;
  • NX_uv is the vector from the texture coordinate of vertex N on the first edge of the target triangle to the texture coordinate of the predicted projection point X of the vertex to be decoded on the first edge.
  • Specifically, the decoding end can obtain the texture coordinates of the projection point of the vertex to be decoded on the first edge according to the first formula.
  • The first formula is X_uv = N_uv + NX_uv, or X_uv = N_uv - XN_uv, where:
  • X_uv is the texture coordinate of the projection point of the vertex to be decoded on the first edge;
  • N_uv is the texture coordinate of the vertex N on the first edge of the target triangle.
  • Optionally, obtaining the predicted texture coordinates of the vertex to be decoded according to the texture coordinates of the projection point includes:
  • the decoding end obtains the texture coordinates of the vertex to be decoded from X_uv and the vector XC_uv; the first triangle and the target triangle share the first edge, and the opposite vertex of the first edge in the first triangle is the first vertex O;
  • X_uv is the texture coordinate of the predicted projection point X of the vertex to be decoded on the first edge, and XC_uv is the vector from the texture coordinate of the predicted projection point X to the texture coordinate of the vertex C to be decoded.
  • Specifically, the decoding end can obtain the texture coordinates of the vertex to be decoded according to the second formula Pred_C_NP = X_uv + XC_uv, where:
  • Pred_C_NP is the predicted texture coordinate of the vertex to be decoded;
  • X_uv is the texture coordinate of the predicted projection point X of the vertex to be decoded on the first edge;
  • O_uv is the texture coordinate of the first vertex O (the opposite vertex of the first edge in the first triangle), used to fix on which side of the first edge the prediction lies.
  • Optionally, obtaining the predicted texture coordinates of the vertex to be decoded according to the texture coordinates of the projection point includes:
  • the decoding end reads the target identifier corresponding to the vertex to be decoded, and determines the texture coordinates of the vertex to be decoded from the target identifier, X_uv, and the vector XC_uv; the first triangle and the target triangle share the first edge, and the opposite vertex of the first edge in the first triangle is the first vertex O;
  • X_uv is the texture coordinate of the predicted projection point X of the vertex to be decoded on the first edge, and XC_uv is the vector from the texture coordinate of the predicted projection point X to the texture coordinate C_uv of the vertex to be decoded.
  • Specifically, the decoding end may determine the texture coordinates of the vertex to be decoded based on the read target identifier corresponding to the vertex to be decoded and the third formula, where:
  • Pred_C_NP is the predicted texture coordinate of the vertex to be decoded;
  • X_uv is the texture coordinate of the predicted projection point X of the vertex to be decoded on the first edge; XC_uv is the vector from the predicted projection point X to the texture coordinate C_uv of the vertex to be decoded;
  • C_uv is the texture coordinate of the vertex to be decoded;
  • the target identifier is used to characterize the size relationship between the two candidate predicted texture coordinates obtained from the third formula.
  • The decoding process in the embodiment of the present application is the reverse of the encoding process.
  • The decoding block diagram is shown in Figure 6. That is, the UV coordinate decoding process first decodes the geometric information and connectivity, then decodes the code stream based on them to obtain the residual, then obtains the predicted UV coordinates, and finally uses the residual and the predicted UV coordinates to obtain the real UV coordinates, completing the decoding of the UV coordinates. For the method of predicting UV coordinates in the embodiment of the present application, see the description of the encoding side, which is not repeated here; an outline follows.
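To close the loop, here is the decoder counterpart of the toy encoder sketched after the Figure 4 discussion: it walks the same edge order, regenerates each (stubbed) prediction, and adds the decoded residual. It consumes the list produced by encode_uv above; all of this is illustrative scaffolding, not the application's bitstream format.

```python
from collections import deque

def decode_uv(triangles, stream):
    it = iter(stream)
    uv, coded, edges = {}, set(), deque()
    a, b, c0 = triangles[0]
    for v in (a, b, c0):
        _, uv[v] = next(it)                      # initial vertices were coded raw
        coded.add(v)
    edges.extend([(a, b), (b, c0), (c0, a)])
    while edges:
        n, p = edges.popleft()
        for tri in triangles:
            rest = set(tri) - {n, p}
            if {n, p} <= set(tri) and rest and (c := rest.pop()) not in coded:
                pred = ((uv[n][0] + uv[p][0]) / 2, (uv[n][1] + uv[p][1]) / 2)
                _, res = next(it)
                uv[c] = (pred[0] + res[0], pred[1] + res[1])
                coded.add(c)
                edges.extend([(n, c), (c, p)])
    return uv

# Round trip on the toy mesh from the encoder sketch:
# decode_uv(tris, encode_uv(tris, uvs)) reproduces uvs.
```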
  • the execution subject may be an encoding device.
  • the encoding device performing the encoding method is taken as an example to illustrate the encoding device provided by the embodiment of the present application.
  • this embodiment of the present application also provides an encoding device 700, which includes:
  • the reconstruction module 701 is used to reconstruct the geometric information and connection relationships of the target three-dimensional grid according to the coding results of the geometric information and connection relationships of the target three-dimensional grid;
  • the shift module 702 is configured to perform a first shift operation on the coordinates of each vertex in the target three-dimensional grid according to the reconstructed geometric information and connection relationships to obtain the shift coordinates of each vertex; the bit length corresponding to the shift coordinates is greater than the bit length corresponding to the coordinates;
  • An encoding module 703 is configured to encode the texture coordinates of each vertex in the target three-dimensional mesh based on the shift coordinate of each vertex.
  • the shift module 702 is specifically configured to: for any vertex in the target three-dimensional grid, increase the number of bits occupied by the coordinates of the vertex to obtain first target bits corresponding to the coordinates of the vertex; and use the first target bits to store the coordinates of the vertex to obtain the shift coordinates of the vertex.
  • the shift module 702 is further specifically configured to: perform a binary left shift on the coordinates of the vertex using the first shift parameter.
  • the encoding module 703 is specifically configured to: determine, based on the shift coordinates of each vertex, N predicted texture coordinates of each vertex in the target three-dimensional grid, where N is a positive integer greater than 1; and encode the texture coordinate residual of each vertex, the texture coordinate residual of a vertex being determined based on the N predicted texture coordinates of that vertex.
  • the encoding module 703 is further specifically configured to: select a first edge from the edge set, and determine the triangle corresponding to the first edge, as well as triangles that take the vertex to be encoded as the opposite vertex and do not include the first edge, as target triangles; the vertices in a target triangle other than the vertex to be encoded are encoded vertices, and the opposite vertex of the first edge in the triangle corresponding to the first edge is the vertex to be encoded; and, for each target triangle, obtain the predicted texture coordinates of the vertex to be encoded in that target triangle.
  • the encoding module 703 is further specifically configured to: for any vertex, determine the target value corresponding to the N predicted texture coordinates of the vertex as the first target texture coordinate of the vertex; perform a second shift operation on the first target texture coordinate of the vertex to obtain a second target texture coordinate; and encode the texture coordinate residual of the vertex, the texture coordinate residual being determined based on the true texture coordinate of the vertex and the second target texture coordinate of the vertex.
  • the encoding module 703 is further specifically configured to: reduce the number of bits occupied by the first target texture coordinate of the vertex to obtain second target bits corresponding to the first target texture coordinate; and use the second target bits to store the first target texture coordinate to obtain the second target texture coordinate of the vertex; the bit length corresponding to the second target texture coordinate is smaller than the bit length corresponding to the first target texture coordinate.
  • the encoding module 703 is further specifically configured to: perform a binary right shift on the first target texture coordinate of the vertex using the second shift parameter.
  • the encoding module 703 is further specifically configured to: for any vertex, determine the target value corresponding to the N predicted texture coordinates of the vertex as the first target texture coordinate of the vertex; determine the texture coordinate residual of the vertex based on the true texture coordinate of the vertex and the first target texture coordinate of the vertex; and encode the texture coordinate residual on which the second shift operation has been performed.
  • the encoding module 703 is further specifically configured to: encode the texture coordinates of each vertex in the target three-dimensional grid based on the shift coordinates of each vertex.
  • the encoding module 703 is further specifically configured to: select a first edge from the edge set and obtain the predicted texture coordinates of the vertex to be encoded in the target triangle; the target triangle is the triangle corresponding to the first edge, and the opposite vertex of the target triangle is the vertex to be encoded; and encode the texture coordinate residual between the true texture coordinates of the vertex to be encoded and the predicted texture coordinates of the vertex to be encoded.
  • the encoding module 703 is further specifically configured to: perform the second shift operation on the predicted texture coordinates of the vertex to be encoded to obtain a third target texture coordinate of the vertex to be encoded; and encode the texture coordinate residual between the true texture coordinates of the vertex to be encoded and the third target texture coordinate.
  • the encoding module 703 is further specifically configured to: determine the texture coordinate residual of the vertex to be encoded based on the true texture coordinates and the predicted texture coordinates of the vertex to be encoded; and encode the texture coordinate residual on which the second shift operation has been performed.
  • the encoding module 703 is further specifically configured to: obtain, according to the geometric coordinates of each vertex of the target triangle, the texture coordinates of the projection point of the vertex to be encoded on the first edge; and obtain the predicted texture coordinates of the vertex to be encoded according to the texture coordinates of the projection point.
  • the encoding module 703 is further specifically configured to: obtain the texture coordinates of the projection point of the vertex to be encoded on the first edge according to the sum of vec(NX)uv and Nuv, or according to the difference between Nuv and vec(XN)uv;
  • Nuv is the texture coordinate of the vertex N on the first edge of the target triangle; vec(NX)uv is the vector from the vertex N on the first edge of the target triangle to the texture coordinate of the predicted projection point X of the vertex to be encoded on the first edge; vec(XN)uv is the vector from the predicted projection point X on the first edge to the texture coordinate of the vertex N on the first edge of the target triangle.
  • the encoding module 703 is further specifically configured to: when the first vertex O corresponding to the first edge is an encoded vertex, or the first triangle is not a degenerate triangle, obtain the texture coordinates of the vertex to be encoded according to Xuv and vec(XC)uv; the first triangle and the target triangle share the first edge, and the opposite vertex of the first edge in the first triangle is the first vertex O;
  • Xuv is the texture coordinate of the predicted projection point X of the vertex to be encoded on the first edge, and vec(XC)uv is the vector from the predicted projection point X to the texture coordinate of the vertex C to be encoded.
  • the encoding module 703 is further specifically configured to: when the first vertex O corresponding to the first edge is an unencoded vertex, or the first triangle is a degenerate triangle, obtain the texture coordinates of the vertex to be encoded according to Xuv and vec(XC)uv, and encode the target identifier corresponding to the vertex to be encoded; the first triangle and the target triangle share the first edge, and the opposite vertex of the first edge in the first triangle is the first vertex O;
  • Xuv is the texture coordinate of the predicted projection point X of the vertex to be encoded on the first edge, and vec(XC)uv is the vector from the predicted projection point X to the texture coordinate Cuv of the vertex to be encoded.
  • In the embodiment of the present application, the geometric information and connection relationships of the target three-dimensional grid are reconstructed according to the encoding results of the geometric information and connection relationships of the target three-dimensional grid;
  • a first shift operation is performed on the coordinates of each vertex in the target three-dimensional grid according to the reconstructed geometric information and connection relationships to obtain the shift coordinates of each vertex; based on the shift coordinates of each vertex, the texture coordinates of each vertex in the target three-dimensional grid are encoded.
  • the first shift operation performed on the coordinates of each vertex yields shift coordinates whose corresponding bit length is greater than the bit length corresponding to the coordinates, so that more bits are used to store the shift coordinates of the vertices; in the subsequent UV-coordinate prediction based on the shift coordinates, the coordinate data is stored with high precision, which prevents the prediction residual from increasing due to loss of data precision and thereby improves the coding efficiency of the texture coordinates.
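  • The following toy example is not taken from the application; the values and the shift amount are invented for illustration. It shows how carrying the computation in a longer bit length preserves the fractional part that low-precision integer storage would discard:

    # Averaging two integer UV values at the original precision vs. after the
    # first shift operation; the shifted domain keeps the exact mean.
    LEFT = 8                              # assumed shift parameter

    x, y = 7, 10                          # exact mean is 8.5
    low = (x + y) // 2                    # low precision: 8, the .5 is lost
    high = ((x << LEFT) + (y << LEFT)) // 2
    print(low, high / (1 << LEFT))        # 8 vs. 8.5 -- no precision loss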
  • This device embodiment corresponds to the above-mentioned encoding method embodiment shown in Figure 1.
  • Each implementation process and implementation method on the encoding end in the above-mentioned method embodiment can be applied to this device embodiment, and can achieve the same technical effect.
  • the execution subject may be a decoding device.
  • the decoding device performing the decoding method is taken as an example to illustrate the decoding device provided by the embodiment of the present application.
  • this embodiment of the present application also provides a decoding device 800, which includes:
  • Decoding module 801 is configured to decode the obtained code stream corresponding to the target three-dimensional mesh to obtain the geometric information and connection relationships of the target three-dimensional mesh, and to decode the obtained code stream corresponding to each vertex to obtain the texture coordinate residual of each vertex;
  • Shift module 802 is configured to perform a first shift operation on the texture coordinate residual of each vertex to obtain a target residual of each vertex; the bit length corresponding to the target residual is greater than the bit length corresponding to the texture coordinate residual;
  • Determining module 803 is configured to determine the real texture coordinates of each vertex based on the geometric information and connection relationships of the target three-dimensional mesh and the target residual of each vertex.
  • the shift module 802 is specifically configured to: for any vertex, increase the number of bits occupied by the texture coordinate residual of the vertex to obtain third target bits corresponding to the texture coordinate residual; and use the third target bits to store the texture coordinate residual of the vertex to obtain the target residual of the vertex.
  • the shift module 802 is further specifically configured to: perform a binary left shift on the texture coordinate residual of the vertex using the first shift parameter.
  • the determination module 803 is specifically configured to: determine N predicted texture coordinates of each vertex in the target three-dimensional mesh, where N is a positive integer greater than 1; perform the first shift operation on the N predicted texture coordinates of each vertex to obtain N fourth target texture coordinates of each vertex, where the bit length corresponding to the fourth target texture coordinates is greater than the bit length corresponding to the predicted texture coordinates; and determine the real texture coordinates of each vertex based on the N fourth target texture coordinates of each vertex and the target residual of each vertex.
  • the determination module 803 is further specifically configured to: select a first edge from the edge set, and determine the triangle corresponding to the first edge, as well as triangles that take the vertex to be decoded as the opposite vertex and do not include the first edge, as target triangles; the vertices in a target triangle other than the vertex to be decoded are decoded vertices, and the opposite vertex of the first edge in the triangle corresponding to the first edge is the vertex to be decoded; and, for each target triangle, obtain the predicted texture coordinates of the vertex to be decoded in that target triangle.
  • the determination module 803 is further specifically configured to: for any vertex, perform the second shift operation on the N fourth target coordinates of the vertex to obtain the N predicted texture coordinates of the vertex, and perform the second shift operation on the target residual of the vertex to obtain the texture coordinate residual of the vertex; determine the target value corresponding to the N predicted texture coordinates as the fifth target texture coordinate of the vertex; and perform an addition operation on the fifth target texture coordinate of the vertex and the texture coordinate residual of the vertex to determine the real texture coordinates of the vertex.
  • the determination module 803 is further specifically configured to: determine the target value corresponding to the N fourth target texture coordinates of the vertex as the sixth target texture coordinate of the vertex; perform an addition operation on the sixth target texture coordinate and the target residual of the vertex to determine the seventh target texture coordinate of the vertex; and perform the second shift operation on the seventh target texture coordinate of the vertex to determine the real texture coordinates of the vertex.
  • the determination module 803 is further specifically configured to: decode to obtain the real texture coordinates of each vertex in the target three-dimensional mesh.
  • the determination module 803 is further specifically configured to: select a first edge from the edge set and obtain the predicted texture coordinates of the vertex to be decoded in the target triangle; the target triangle is the triangle corresponding to the first edge, and the opposite vertex of the target triangle is the vertex to be decoded;
  • perform the first shift operation on the predicted texture coordinates of the vertex to be decoded to obtain an eighth target texture coordinate of the vertex to be decoded; the bit length corresponding to the eighth target texture coordinate is greater than the bit length corresponding to the predicted texture coordinates;
  • determine the real texture coordinates of the vertex to be decoded based on the eighth target texture coordinate of the vertex to be decoded and the target residual of the vertex to be decoded.
  • the determination module 803 is further specifically configured to: perform the second shift operation on the eighth target texture coordinate of the vertex to be decoded to obtain the predicted texture coordinates of the vertex, and perform the second shift operation on the target residual of the vertex to be decoded to obtain the texture coordinate residual of the vertex; and perform an addition operation on the predicted texture coordinates of the vertex to be decoded and the texture coordinate residual of the vertex to be decoded to determine the real texture coordinates of the vertex to be decoded.
  • the determination module 803 is further specifically configured to: perform an addition operation on the eighth target texture coordinate of the vertex to be decoded and the target residual of the vertex to be decoded to determine a ninth target texture coordinate of the vertex to be decoded; and perform the second shift operation on the ninth target texture coordinate to determine the real texture coordinates of the vertex to be decoded.
  • the determination module 803 is further specifically configured to: obtain, according to the geometric coordinates of each vertex of the target triangle, the texture coordinates of the projection point of the vertex to be decoded on the first edge; and obtain the predicted texture coordinates of the vertex to be decoded according to the texture coordinates of the projection point.
  • the determination module 803 is further specifically configured to: obtain the texture coordinates of the projection point of the vertex to be decoded on the first edge according to the sum of vec(NX)uv and Nuv, or according to the difference between Nuv and vec(XN)uv;
  • Nuv is the texture coordinate of the vertex N on the first edge of the target triangle; vec(NX)uv is the vector from the vertex N on the first edge of the target triangle to the texture coordinate of the predicted projection point X of the vertex to be decoded on the first edge; vec(XN)uv is the vector from the predicted projection point X on the first edge to the texture coordinate of the vertex N.
  • the determination module 803 is further specifically configured to: when the first vertex O corresponding to the first edge is a decoded vertex, or the first triangle is not a degenerate triangle, obtain the texture coordinates of the vertex to be decoded according to Xuv and vec(XC)uv; the first triangle and the target triangle share the first edge, and the opposite vertex of the first edge in the first triangle is the first vertex O;
  • Xuv is the texture coordinate of the predicted projection point X of the vertex to be decoded on the first edge, and vec(XC)uv is the vector from the predicted projection point X to the texture coordinate of the vertex C to be decoded.
  • the determination module 803 is further specifically configured to: when the first vertex O corresponding to the first edge is an undecoded vertex, or the first triangle is a degenerate triangle, determine the texture coordinates of the vertex to be decoded according to the read target identifier, Xuv and vec(XC)uv; the first triangle and the target triangle share the first edge, and the opposite vertex of the first edge in the first triangle is the first vertex O;
  • Xuv is the texture coordinate of the predicted projection point X of the vertex to be decoded on the first edge, and vec(XC)uv is the vector from the predicted projection point X to the texture coordinate Cuv of the vertex to be decoded.
  • In the embodiment of the present application, the obtained code stream corresponding to the target three-dimensional mesh is decoded to obtain the geometric information and connection relationships of the target three-dimensional mesh, and the obtained code stream corresponding to each vertex is decoded to obtain the texture coordinate residual of each vertex.
  • the first shift operation is performed on the texture coordinate residual of each vertex to obtain the target residual of each vertex, where the bit length corresponding to the target residual is greater than the bit length corresponding to the texture coordinate residual; in this way, more bits are used to store the target residual.
  • in the subsequent UV-coordinate prediction based on the target residuals, the coordinate data is stored with high precision, which prevents the prediction residual from increasing due to loss of data precision and thereby improves the decoding efficiency of the texture coordinates.
  • the decoding device provided by the embodiment of the present application can implement each process implemented by the method embodiment in Figure 5 and achieve the same technical effect. To avoid duplication, details will not be described here.
  • the encoding device and the decoding device in the embodiments of the present application may be an electronic device, for example an electronic device with an operating system, or may be a component in the electronic device, such as an integrated circuit or a chip.
  • the electronic device may be a terminal or a device other than a terminal.
  • by way of example, terminals may include but are not limited to the types of terminals listed above, and other devices may be servers, Network Attached Storage (NAS), etc., which are not specifically limited in the embodiments of this application.
  • this embodiment of the present application also provides a communication device 900, which includes a processor 901 and a memory 902.
  • the memory 902 stores programs or instructions that can be run on the processor 901; for example, when the communication device 900 is a terminal, the program or instructions, when executed by the processor 901, implement each step of the above encoding method embodiment or each step of the above decoding method embodiment, and can achieve the same technical effects.
  • An embodiment of the present application also provides a terminal, including a processor 901 and a communication interface.
  • the processor 901 is configured to perform the following operations: reconstruct the geometric information and connection relationships of the target three-dimensional mesh according to the encoding results of the geometric information and connection relationships of the target three-dimensional mesh; perform, according to the reconstructed geometric information and connection relationships, a first shift operation on the coordinates of each vertex in the target three-dimensional mesh to obtain the shift coordinates of each vertex; and encode the texture coordinates of each vertex in the target three-dimensional mesh based on the shift coordinates of each vertex.
  • alternatively, the processor 901 is configured to perform the following operations: decode the obtained code stream corresponding to the target three-dimensional mesh to obtain the geometric information and connection relationships of the target three-dimensional mesh, and decode the obtained code stream corresponding to each vertex to obtain the texture coordinate residual of each vertex; perform the first shift operation on the texture coordinate residual of each vertex to obtain the target residual of each vertex; and determine the real texture coordinates of each vertex based on the geometric information and connection relationships of the target three-dimensional mesh and the target residual of each vertex.
  • FIG. 10 is a schematic diagram of the hardware structure of a terminal that implements an embodiment of the present application.
  • the terminal 1000 includes but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, a processor 1010 and other components.
  • the terminal 1000 may also include a power supply (such as a battery) that supplies power to the various components.
  • the power supply may be logically connected to the processor 1010 through a power management system, so that functions such as charge management, discharge management and power consumption management are implemented through the power management system.
  • the terminal structure shown in FIG. 10 does not constitute a limitation on the terminal.
  • the terminal may include more or fewer components than shown in the figure, or some components may be combined or arranged differently, which will not be described again here.
  • the input unit 1004 may include a graphics processor (Graphics Processing Unit, GPU) 10041 and a microphone 10042.
  • the graphics processor 10041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in the video capture mode or the image capture mode.
  • the display unit 1006 may include a display panel 10061, which may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
  • the user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072 .
  • The touch panel 10071, also known as a touch screen, may include two parts: a touch detection device and a touch controller.
  • Other input devices 10072 may include but are not limited to physical keyboards, function keys (such as volume control keys, switch keys, etc.), trackballs, mice, and joysticks, which will not be described again here.
  • in this embodiment of the present application, after receiving downlink data from the network side device, the radio frequency unit 1001 can transmit it to the processor 1010 for processing; the radio frequency unit 1001 can also send uplink data to the network side device.
  • generally, the radio frequency unit 1001 includes, but is not limited to, an antenna, an amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, etc.
  • Memory 1009 may be used to store software programs or instructions as well as various data.
  • the memory 1009 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, wherein the first storage area may store an operating system, and application programs or instructions required for at least one function (such as a sound playback function and an image playback function).
  • memory 1009 may include volatile memory or nonvolatile memory, or memory 1009 may include both volatile and nonvolatile memory.
  • The non-volatile memory can be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM) or a flash memory.
  • The volatile memory can be a random access memory (Random Access Memory, RAM), a static random access memory (Static RAM, SRAM), a dynamic random access memory (Dynamic RAM, DRAM), a synchronous dynamic random access memory (Synchronous DRAM, SDRAM), a double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDRSDRAM), an enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), a synch link dynamic random access memory (Synch link DRAM, SLDRAM) or a direct rambus random access memory (Direct Rambus RAM, DRRAM). The memory 1009 in the embodiments of the present application includes but is not limited to these and any other suitable types of memory.
  • the processor 1010 may include one or more processing units; optionally, the processor 1010 integrates an application processor and a modem processor, where the application processor mainly handles operations involving the operating system, user interface, application programs and the like, and the modem processor mainly processes wireless communication signals, such as a baseband processor. It can be understood that the above modem processor may alternatively not be integrated into the processor 1010.
  • the processor 1010 is configured to perform the following operations: reconstruct the geometric information and connection relationships of the target three-dimensional mesh according to the encoding results of the geometric information and connection relationships of the target three-dimensional mesh; perform, according to the reconstructed geometric information and connection relationships, a first shift operation on the coordinates of each vertex in the target three-dimensional mesh to obtain the shift coordinates of each vertex; and encode the texture coordinates of each vertex in the target three-dimensional mesh based on the shift coordinates of each vertex.
  • alternatively, the processor 1010 is configured to perform the following operations: decode the obtained code stream corresponding to the target three-dimensional mesh to obtain the geometric information and connection relationships of the target three-dimensional mesh, and decode the obtained code stream corresponding to each vertex to obtain the texture coordinate residual of each vertex; perform the first shift operation on the texture coordinate residual of each vertex to obtain the target residual of each vertex; and determine the real texture coordinates of each vertex based on the geometric information and connection relationships of the target three-dimensional mesh and the target residual of each vertex.
  • Embodiments of the present application also provide a readable storage medium on which programs or instructions are stored. When the program or instructions are executed by a processor, each process of the above encoding method embodiment or each process of the above decoding method embodiment is implemented, and the same technical effects can be achieved; to avoid repetition, details are not described again here.
  • the processor is the processor in the terminal described in the above embodiment.
  • the readable storage medium includes computer readable storage media, such as computer read-only memory ROM, random access memory RAM, magnetic disk or optical disk, etc.
  • An embodiment of the present application further provides a chip. The chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run programs or instructions to implement each process of the above encoding method embodiment or each process of the above decoding method embodiment, and can achieve the same technical effects; to avoid repetition, details are not described again here.
  • It should be understood that the chip mentioned in the embodiments of this application may also be called a system-level chip, a system chip, a chip system or a system-on-chip, etc.
  • Embodiments of the present application further provide a computer program/program product. The computer program/program product is stored in a storage medium and is executed by at least one processor to implement each process of the above encoding method embodiment or each process of the above decoding method embodiment, and can achieve the same technical effects; to avoid duplication, details are not described again here.
  • Embodiments of the present application further provide a system. The system includes an encoding end and a decoding end; the encoding end executes each process of the above encoding method embodiment, and the decoding end executes each process of the above decoding method embodiment, and the same technical effects can be achieved; to avoid repetition, details are not described again here.
  • the disclosed devices and methods can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division; in actual implementation, there may be other division methods.
  • for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • in addition, the coupling or direct coupling or communication connection between components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical or in other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or they may be distributed to multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application can be integrated into one processing unit, each unit can exist physically alone, or two or more units can be integrated into one unit.
  • if the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of this application.
  • the aforementioned storage media include: U disk, mobile hard disk, ROM, RAM, magnetic disk or optical disk and other media that can store program codes.
  • Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by controlling the relevant hardware through a computer program; the program can be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of each of the above methods.
  • the storage medium can be a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM) or a random access memory (Random Access Memory, RAM), etc.
  • Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus the necessary general hardware platform; of course, they can also be implemented by hardware, but in many cases the former is the better implementation.
  • Based on this understanding, the technical solution of the present application, in essence or the part that contributes to the prior art, can be embodied in the form of a computer software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc) and includes several instructions to cause a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to execute the methods described in the various embodiments of this application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Image Generation (AREA)
  • Compression Of Band Width Or Redundancy In Fax (AREA)

Abstract

The present application discloses an encoding method, a decoding method, an apparatus and a device, relating to the field of encoding and decoding technology. The encoding method includes: an encoding end reconstructs the geometric information and connection relationships of a target three-dimensional mesh according to the encoding results of the geometric information and connection relationships of the target three-dimensional mesh; the encoding end performs, according to the reconstructed geometric information and connection relationships, a first shift operation on the coordinates of each vertex in the target three-dimensional mesh to obtain the shift coordinates of each vertex, where the bit length corresponding to the shift coordinates is greater than the bit length corresponding to the coordinates; and, based on the shift coordinates of each vertex, the texture coordinates of each vertex in the target three-dimensional mesh are encoded.

Description

Encoding method, decoding method, apparatus and device
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 202210865721.3, filed in China on July 21, 2022, the entire contents of which are incorporated herein by reference.
Technical field
This application belongs to the field of encoding and decoding technology, and specifically relates to an encoding method, a decoding method, an apparatus and a device.
Background
Texture coordinates, also known as UV coordinates, are information describing the texture of the vertices of a three-dimensional mesh. A three-dimensional mesh first projects its surface texture into two dimensions, forming a two-dimensional texture image. The UV coordinates indicate where the texture of a three-dimensional vertex lies in the two-dimensional texture image, and correspond one-to-one with the geometric information. Texture coordinates therefore determine the texture map of a three-dimensional mesh and are an important component of it.
In the process of UV coordinate prediction, data is often stored with low precision, so data precision is lost during prediction; the prediction residual increases as a result of this loss, which reduces the coding efficiency of the texture coordinates.
Summary
Embodiments of the present application provide an encoding method, a decoding method, an apparatus and a device, which can solve the problem in the related art that the prediction residual increases due to the loss of data precision, thereby reducing the coding efficiency of texture coordinates.
In a first aspect, an encoding method is provided, including:
an encoding end reconstructs the geometric information and connection relationships of a target three-dimensional mesh according to the encoding results of the geometric information and connection relationships of the target three-dimensional mesh;
the encoding end performs, according to the reconstructed geometric information and connection relationships, a first shift operation on the coordinates of each vertex in the target three-dimensional mesh to obtain the shift coordinates of each vertex; the bit length corresponding to the shift coordinates is greater than the bit length corresponding to the coordinates;
the encoding end encodes the texture coordinates of each vertex in the target three-dimensional mesh based on the shift coordinates of each vertex.
In a second aspect, a decoding method is provided, including:
a decoding end decodes the obtained code stream corresponding to a target three-dimensional mesh to obtain the geometric information and connection relationships of the target three-dimensional mesh, and decodes the obtained code stream corresponding to each vertex to obtain the texture coordinate residual of each vertex;
the decoding end performs a first shift operation on the texture coordinate residual of each vertex to obtain the target residual of each vertex; the bit length corresponding to the target residual is greater than the bit length corresponding to the texture coordinate residual;
the decoding end determines the true texture coordinates of each vertex based on the geometric information and connection relationships of the target three-dimensional mesh and the target residual of each vertex.
In a third aspect, an encoding apparatus is provided, including:
a reconstruction module, configured to reconstruct the geometric information and connection relationships of the target three-dimensional mesh according to the encoding results of the geometric information and connection relationships of the target three-dimensional mesh;
a shift module, configured to perform, according to the reconstructed geometric information and connection relationships, a first shift operation on the coordinates of each vertex in the target three-dimensional mesh to obtain the shift coordinates of each vertex; the bit length corresponding to the shift coordinates is greater than the bit length corresponding to the coordinates;
an encoding module, configured to encode the texture coordinates of each vertex in the target three-dimensional mesh based on the shift coordinates of each vertex.
In a fourth aspect, a decoding apparatus is provided, including:
a decoding module, configured to decode the obtained code stream corresponding to the target three-dimensional mesh to obtain the geometric information and connection relationships of the target three-dimensional mesh, and to decode the obtained code stream corresponding to each vertex to obtain the texture coordinate residual of each vertex;
a shift module, configured to perform a first shift operation on the texture coordinate residual of each vertex to obtain the target residual of each vertex; the bit length corresponding to the target residual is greater than the bit length corresponding to the texture coordinate residual;
a determination module, configured to determine the true texture coordinates of each vertex based on the geometric information and connection relationships of the target three-dimensional mesh and the target residual of each vertex.
In a fifth aspect, a terminal is provided, the terminal including a processor and a memory, the memory storing programs or instructions runnable on the processor, the programs or instructions, when executed by the processor, implementing the steps of the method according to the first aspect or the steps of the method according to the second aspect.
In a sixth aspect, a readable storage medium is provided, on which programs or instructions are stored; when the programs or instructions are executed by a processor, the steps of the method according to the first aspect or the steps of the method according to the second aspect are implemented.
In a seventh aspect, a chip is provided, the chip including a processor and a communication interface, the communication interface being coupled to the processor, the processor being configured to run programs or instructions to implement the method according to the first aspect or the method according to the second aspect.
In an eighth aspect, a computer program/program product is provided, the computer program/program product being stored in a storage medium and executed by at least one processor to implement the steps of the method according to the first aspect or the steps of the method according to the second aspect.
In a ninth aspect, a system is provided, the system including an encoding end and a decoding end, the encoding end performing the steps of the method according to the first aspect, and the decoding end performing the steps of the method according to the second aspect.
In the embodiments of the present application, the geometric information and connection relationships of the target three-dimensional mesh are reconstructed according to the encoding results of the geometric information and connection relationships of the target three-dimensional mesh; according to the reconstructed geometric information and connection relationships, a first shift operation is performed on the coordinates of each vertex in the target three-dimensional mesh to obtain the shift coordinates of each vertex; and, based on the shift coordinates of each vertex, the texture coordinates of each vertex in the target three-dimensional mesh are encoded. In the above solution, the bit length corresponding to the shift coordinates is greater than the bit length corresponding to the coordinates, so more bits are used to store the shift coordinates of the vertices; in the subsequent UV-coordinate prediction based on the shift coordinates, the coordinate data is stored with high precision, preventing the prediction residual from increasing due to loss of data precision and thereby improving the coding efficiency of the texture coordinates.
Brief description of the drawings
Figure 1 is a schematic flowchart of the encoding method provided by an embodiment of the present application;
Figure 2 is a schematic diagram of searching for target triangles provided by an embodiment of the present application;
Figure 3 is a geometric schematic diagram of the prediction principle provided by an embodiment of the present application;
Figure 4 is a schematic diagram of the UV coordinate encoding framework provided by an embodiment of the present application;
Figure 5 is a schematic flowchart of the decoding method provided by an embodiment of the present application;
Figure 6 is a schematic diagram of the UV coordinate decoding framework provided by an embodiment of the present application;
Figure 7 is a structural diagram of the encoding apparatus provided by an embodiment of the present application;
Figure 8 is a structural diagram of the decoding apparatus provided by an embodiment of the present application;
Figure 9 is a structural diagram of the communication device provided by an embodiment of the present application;
Figure 10 is a schematic diagram of the hardware structure of a terminal provided by an embodiment of the present application.
Detailed description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the accompanying drawings of the embodiments of the present application. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the protection scope of the present application.
The terms "first", "second" and the like in the specification and claims of the present application are used to distinguish similar objects, not to describe a specific order or sequence. It should be understood that terms used in this way are interchangeable where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described here; moreover, the objects distinguished by "first" and "second" are usually of one type, and the number of objects is not limited; for example, there may be one or more first objects. In addition, "and/or" in the specification and claims indicates at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The encoding and decoding methods, apparatuses and devices provided by the embodiments of the present application are described in detail below through some embodiments and their application scenarios with reference to the accompanying drawings.
Please refer to Figure 1, which is a schematic flowchart of the encoding method provided by an embodiment of the present application. The encoding method provided by this embodiment includes the following steps:
S101: the encoding end reconstructs the geometric information and connection relationships of a target three-dimensional mesh according to the encoding results of the geometric information and connection relationships of the target three-dimensional mesh.
It should be noted that the target three-dimensional mesh referred to in this application can be understood as the three-dimensional mesh corresponding to any video frame; the geometric information of the target three-dimensional mesh can be understood as the coordinates of the vertices in the three-dimensional mesh, usually three-dimensional coordinates; the connection relationships describe the connections between elements of the three-dimensional mesh such as vertices and faces, and may also be called connectivity relationships.
It should be noted that, in this step, the texture coordinates of the vertices are encoded on the basis of the geometric information and connection relationships; in order to guarantee consistency between the encoded texture coordinates and the encoded geometric information and connection relationships, the embodiments of the present application use the geometric information and connection relationships reconstructed after encoding.
S102: the encoding end performs, according to the reconstructed geometric information and connection relationships, a first shift operation on the coordinates of each vertex in the target three-dimensional mesh to obtain the shift coordinates of each vertex.
In this step, a first shift operation is performed on the coordinates of each vertex in the target three-dimensional mesh to obtain the shift coordinates of each vertex, where the bit length corresponding to the shift coordinates is greater than the bit length corresponding to the coordinates, thereby converting coordinates of low-precision bit length into coordinates of high-precision bit length.
For the specific implementation of performing the first shift operation on the coordinates of each vertex to obtain the shift coordinates of each vertex, please refer to the subsequent embodiments.
S103: the encoding end encodes the texture coordinates of each vertex in the target three-dimensional mesh based on the shift coordinates of each vertex.
In the embodiment of the present application, the geometric information and connection relationships of the target three-dimensional mesh are reconstructed according to the encoding results of the geometric information and connection relationships of the target three-dimensional mesh; according to the reconstructed geometric information and connection relationships, a first shift operation is performed on the coordinates of each vertex in the target three-dimensional mesh to obtain the shift coordinates of each vertex; and, based on the shift coordinates of each vertex, the texture coordinates of each vertex in the target three-dimensional mesh are encoded. In the above solution, the bit length corresponding to the shift coordinates is greater than the bit length corresponding to the coordinates, so more bits are used to store the shift coordinates of the vertices; in the subsequent UV-coordinate prediction based on the shift coordinates, the coordinate data is stored with high precision, preventing the prediction residual from increasing due to loss of data precision and thereby improving the coding efficiency of the texture coordinates.
Optionally, performing the first shift operation on the coordinates of each vertex in the target three-dimensional mesh to obtain the shift coordinates of each vertex includes:
for any vertex in the target three-dimensional mesh, the encoding end increases the number of bits occupied by the coordinates of the vertex to obtain first target bits corresponding to the coordinates of the vertex;
the encoding end uses the first target bits to store the coordinates of the vertex to obtain the shift coordinates of the vertex.
In this embodiment, for any vertex in the target three-dimensional mesh, the above first shift operation is used to increase the number of bits occupied by the coordinates of the vertex.
Optionally, increasing the number of bits occupied by the coordinates of the vertex includes:
the encoding end performs a binary left shift on the coordinates of the vertex using a first shift parameter.
Another optional implementation is to add a first preset number of bits to the bits occupied by the coordinates of the vertex to obtain the first target bits.
The bits after the increase are called the first target bits; using the first target bits to store the coordinates of the vertex allows more coordinate data to be stored, yielding the shift coordinates of the vertex.
The specific implementation of performing a binary left shift on the coordinates of the vertex using the first shift parameter is as follows:
A rectangular coordinate system is established in the target three-dimensional mesh, and the coordinates of each vertex are expressed in this coordinate system. A parameter can be used to perform a binary left shift on the coordinates of each vertex of the target three-dimensional mesh to obtain the shift coordinates of each vertex; specifically, see the formula C′uvx = Cuvx << leftshift, where Cuvx is the coordinate, leftshift is the shift parameter, and C′uvx is the shift coordinate.
For example, if the bit length of Cuvx is M and leftshift represents a left shift of K1 bits, then the bit length of C′uvx is M + K1, where K1 is a positive integer.
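A minimal sketch of the first shift operation is given below; leftshift = 8 is only an assumed example value, since the embodiment does not mandate a particular shift parameter:

    def first_shift(c_uvx: int, leftshift: int = 8) -> int:
        # C'uvx = Cuvx << leftshift: an M-bit coordinate now occupies M + leftshift bits.
        return c_uvx << leftshift

    print(first_shift(1023))  # 261888: a 10-bit value stored in 18 bits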
Optionally, encoding the texture coordinates of each vertex in the target three-dimensional mesh based on the shift coordinates of each vertex includes:
the encoding end determines, based on the shift coordinates of each vertex, N predicted texture coordinates of each vertex in the target three-dimensional mesh, where N is a positive integer greater than 1;
the encoding end encodes the texture coordinate residual of each vertex; the texture coordinate residual of a vertex is determined based on the N predicted texture coordinates of the vertex.
In this embodiment, the encoding end can encode the texture coordinates of each vertex in the target three-dimensional mesh by predicting vertices with multiple encoded triangles.
In this embodiment, the N predicted texture coordinates of each vertex are determined by predicting the vertex from multiple encoded triangles; using multiple encoded triangles to predict the texture coordinates of a vertex improves the compression of the amount of UV-coordinate data.
For the specific implementation of determining the N predicted texture coordinates of each vertex by predicting vertices with multiple encoded triangles, please refer to the subsequent embodiments.
For any vertex, the texture coordinate residual of the vertex can be determined based on its N predicted texture coordinates, and that residual is then encoded; for the specific implementation of encoding the texture coordinate residual of each vertex, please refer to the subsequent embodiments.
Optionally, determining the N predicted texture coordinates of each vertex in the target three-dimensional mesh based on the shift coordinates of each vertex includes:
the encoding end selects a first edge from an edge set, and determines the triangle corresponding to the first edge, as well as triangles that take the vertex to be encoded as the opposite vertex and do not include the first edge, as target triangles; the vertices in a target triangle other than the vertex to be encoded are encoded vertices, and the opposite vertex of the first edge in the triangle corresponding to the first edge is the vertex to be encoded;
for each target triangle, the encoding end obtains the predicted texture coordinates of the vertex to be encoded in that target triangle.
It should be noted that an initial edge set has to be obtained before encoding. Specifically, the initial edge set is obtained as follows:
Before the first edge is selected from the edge set, the method further includes:
the encoding end selects an initial triangle according to the reconstructed geometric information and connection relationships;
the encoding end encodes the texture coordinates of the three vertices of the initial triangle and stores the three edges of the initial triangle into the edge set.
It should be noted that in the embodiments of the present application, no vertex prediction is performed for the initial triangle; its texture coordinates are encoded directly. Optionally, for the initial triangle, the embodiments of the present application may directly encode the texture coordinate of the first vertex of the initial triangle; use the texture coordinate of the first vertex to predict along the edge, thereby obtaining the texture coordinate of the second vertex of the initial triangle; and use similar-triangle predictive coding to obtain the texture coordinate of the third vertex of the initial triangle.
After the texture coordinates of the vertices of the initial triangle are encoded, the edges of the initial triangle are stored in the edge set to form the initial edge set, and subsequent vertices are predicted based on this initial edge set.
For ease of understanding, please refer to Figure 2. As shown in Figure 2, it contains three triangles: a first triangle formed by vertex C, vertex N and vertex P.O; a second triangle formed by vertex C, vertex N and vertex P; and a third triangle formed by vertex C, vertex P and vertex N.O; vertex N, vertex P, vertex P.O and vertex N.O are all unencoded vertices.
If vertex C is the vertex to be encoded and the vertices corresponding to the first edge are vertex N and vertex P, the triangle corresponding to the first edge, namely the second triangle, is determined as a target triangle. Further, rotating around vertex C, triangles whose two vertices other than vertex C are both encoded vertices and which do not include the first edge are searched for and determined as target triangles; that is, both the first triangle and the second triangle are determined as target triangles.
It should be understood that the vertices in a target triangle other than the vertex to be encoded are encoded vertices, and the number of target triangles is greater than 1; optionally, the multiple target triangles are adjacent triangles, or the multiple target triangles are not adjacent to each other.
Optionally, encoding the texture coordinate residual of each vertex includes:
for any vertex, the encoding end determines the target value corresponding to the N predicted texture coordinates of the vertex as the first target texture coordinate of the vertex;
the encoding end performs a second shift operation on the first target texture coordinate of the vertex to obtain a second target texture coordinate;
the encoding end encodes the texture coordinate residual of the vertex, the texture coordinate residual being determined based on the true texture coordinate of the vertex and the second target texture coordinate of the vertex.
In this embodiment, a weighted sum of the above N predicted texture coordinates may be computed, and the target value obtained by the weighted summation is determined as the first target texture coordinate of the vertex; when the weight corresponding to each predicted texture coordinate is the same, the average of the N predicted texture coordinates is determined as the first target texture coordinate of the vertex. It should be understood that other embodiments are not limited to obtaining the target value through a weighted sum of the N predicted texture coordinates; the target value corresponding to the N predicted texture coordinates may also be computed by other operations, which is not specifically limited here.
For ease of understanding, please refer to Figure 3. In the scenario shown in Figure 3, the number of target triangles is 3, so the vertex to be encoded corresponds to 3 predicted texture coordinates, namely PredC_NP, PredC_PON and PredC_PNO. PredC_NP is the predicted texture coordinate of the vertex to be encoded in the second triangle; PredC_PON is the predicted texture coordinate of the vertex to be encoded in the first triangle; PredC_PNO is the predicted texture coordinate of the vertex to be encoded in the third triangle. Optionally, the average of PredC_NP, PredC_PON and PredC_PNO is determined as the first target texture coordinate of the vertex to be encoded.
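A minimal sketch of taking the target value of the N predicted texture coordinates follows; with equal weights, the weighted sum reduces to a plain average, as noted above. The three prediction values are invented for illustration:

    import numpy as np

    def first_target_uv(predictions):
        # Equal-weight weighted sum of the N predicted texture coordinates.
        weights = [1.0 / len(predictions)] * len(predictions)
        return sum(w * p for w, p in zip(weights, predictions))

    pred_c = first_target_uv([np.array([3.0, 4.0]),
                              np.array([5.0, 6.0]),
                              np.array([4.0, 5.0])])
    print(pred_c)  # [4. 5.]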
Optionally, performing the second shift operation on the first target texture coordinate of the vertex to obtain the second target texture coordinate includes:
the encoding end reduces the number of bits occupied by the first target texture coordinate of the vertex to obtain second target bits corresponding to the first target texture coordinate of the vertex;
the encoding end uses the second target bits to store the first target texture coordinate of the vertex to obtain the second target texture coordinate of the vertex; the bit length corresponding to the second target texture coordinate is smaller than the bit length corresponding to the first target texture coordinate.
In this embodiment, after the first target texture coordinate of the vertex to be encoded is obtained, the number of bits occupied by the first target texture coordinate of the vertex is reduced through the second shift operation.
Optionally, reducing the number of bits occupied by the first target texture coordinate of the vertex includes:
the encoding end performs a binary right shift on the first target texture coordinate of the vertex using a second shift parameter.
Another optional implementation is to remove a second preset number of bits from the bits occupied by the first target texture coordinate of the vertex to obtain the second target bits.
The bits after the reduction are called the second target bits; the second target bits are used to store the first target texture coordinate of the vertex, yielding the second target texture coordinate of the vertex.
The specific implementation of performing a binary right shift on the first target texture coordinate of the vertex using the second shift parameter is as follows:
See the formula A' = (Abs(A) + offset) >> rightshift, where rightshift is the shift parameter, A is the first target texture coordinate, A' is the second target texture coordinate, Abs is the absolute-value operation, and the offset parameter is related to the number of bits shifted as represented by the shift parameter.
For example, if the bit length of the first target texture coordinate is M + K2 and rightshift represents a right shift of K2 bits, then the bit length of the second target texture coordinate is M, where K2 is a positive integer.
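A minimal sketch of the second shift operation follows. Choosing offset = 2^(rightshift − 1), i.e. round-to-nearest, is an assumption; the text only states that offset is related to the number of bits shifted:

    def second_shift(a: int, rightshift: int = 8) -> int:
        # A' = (Abs(A) + offset) >> rightshift, per the formula above.
        offset = (1 << rightshift) >> 1   # assumed: half of the shifted-away range
        return (abs(a) + offset) >> rightshift

    print(second_shift(256341))  # 1001: back to the original bit length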
After the second target texture coordinate is obtained, the residual of the vertex to be encoded is obtained from the second target texture coordinate and the true texture coordinate, and the vertex to be encoded is encoded by encoding this residual, which reduces the number of bits required for texture-coordinate encoding.
Optionally, encoding the texture coordinate residual of each vertex includes:
for any vertex, the encoding end determines the target value corresponding to the N predicted texture coordinates of the vertex as the first target texture coordinate of the vertex;
the encoding end determines the texture coordinate residual of the vertex based on the true texture coordinate of the vertex and the first target texture coordinate of the vertex;
the encoding end encodes the texture coordinate residual on which the second shift operation has been performed.
In this embodiment, for any vertex, the first target texture coordinate of the vertex is determined in the same way as in the above embodiment, which is not repeated here.
After the first target texture coordinate is obtained, the residual of the vertex to be encoded is obtained from the first target texture coordinate and the true texture coordinate, determining the texture coordinate residual of the vertex. Optionally, the first target texture coordinate may be subtracted from the true texture coordinate to obtain the texture coordinate residual of the vertex to be encoded.
In this embodiment, after the texture coordinate residual is obtained, the second shift operation is performed on the texture coordinate residual, and the shifted texture coordinate residual is encoded. The implementation of the second shift operation on the texture coordinate residual is the same as the implementation of the second shift operation described above, and is not repeated here.
Optionally, encoding the texture coordinates of each vertex in the target three-dimensional mesh based on the shift coordinates of each vertex includes:
the encoding end encodes, based on the shift coordinates of each vertex, the texture coordinates of each vertex in the target three-dimensional mesh.
It should be noted that, in the embodiments of the present application, the texture coordinates of vertices are encoded by predicting vertices with similar triangles, which guarantees the accuracy of vertex prediction and thus the accuracy of texture-coordinate encoding.
Optionally, encoding the texture coordinates of each vertex in the target three-dimensional mesh based on the shift coordinates of each vertex includes:
the encoding end selects a first edge from the edge set and obtains the predicted texture coordinates of the vertex to be encoded in the target triangle; the target triangle is the triangle corresponding to the first edge, and the opposite vertex of the target triangle is the vertex to be encoded;
the encoding end encodes the texture coordinate residual between the true texture coordinates of the vertex to be encoded and the predicted texture coordinates of the vertex to be encoded.
It should be noted that an initial edge set has to be obtained before encoding. Specifically, the initial edge set is obtained as follows:
the encoding end selects an initial triangle according to the reconstructed geometric information and connection relationships, encodes the texture coordinates of the three vertices of the initial triangle, and stores the three edges of the initial triangle into the edge set.
It should be noted that for the initial triangle, no vertex prediction is performed in the embodiments of the present application; its texture coordinates are encoded directly. After the texture coordinates of the vertices of the initial triangle are encoded, the edges of the initial triangle are stored in the edge set to form the initial edge set, and subsequent vertices are predicted based on this initial edge set.
It should be noted that after the predicted texture coordinates of the vertex to be encoded are obtained, the residual of the vertex to be encoded can be obtained from the predicted texture coordinates and the true texture coordinates, and the vertex to be encoded is encoded by encoding this residual, which reduces the number of bits of texture-coordinate encoding.
It should be noted that the residual may be the difference between the true texture coordinates of the vertex to be encoded and the predicted texture coordinates of the opposite vertex of the target triangle; it may be obtained by subtracting the predicted texture coordinates of the opposite vertex of the target triangle from the true texture coordinates of the vertex to be encoded, or by subtracting the true texture coordinates of the vertex to be encoded from the predicted texture coordinates of the opposite vertex of the target triangle.
Optionally, encoding the texture coordinate residual between the true texture coordinates of the vertex to be encoded and the predicted texture coordinates of the vertex to be encoded includes:
the encoding end performs the second shift operation on the predicted texture coordinates of the vertex to be encoded to obtain a third target texture coordinate of the vertex to be encoded;
the encoding end encodes the texture coordinate residual between the true texture coordinates of the vertex to be encoded and the third target texture coordinate of the vertex to be encoded.
In this embodiment, the second shift operation may be performed on the predicted texture coordinates of the vertex to be encoded to obtain the third target texture coordinate of the vertex to be encoded. It should be understood that the implementation of the second shift operation on the predicted texture coordinates is the same as the implementation of the second shift operation in the above embodiments, and is not repeated here.
After the third target texture coordinate is obtained, the texture coordinate residual between the true texture coordinates of the vertex to be encoded and the third target texture coordinate of the vertex to be encoded is determined and encoded. Optionally, the difference between the true texture coordinates and the third target texture coordinate may be determined as the texture coordinate residual.
Optionally, encoding the texture coordinate residual between the true texture coordinates of the vertex to be encoded and the predicted texture coordinates of the vertex to be encoded includes:
the encoding end determines the texture coordinate residual of the vertex to be encoded based on the true texture coordinates and the predicted texture coordinates of the vertex to be encoded;
the encoding end encodes the texture coordinate residual on which the second shift operation has been performed.
In this embodiment, the texture coordinate residual of the vertex to be encoded may be determined based on the true texture coordinates and the predicted texture coordinates of the vertex to be encoded; further, the second shift operation is performed on the texture coordinate residual, and the shifted texture coordinate residual is encoded.
It should be understood that the implementation of the second shift operation on the texture coordinate residual is the same as the implementation of the second shift operation in the above embodiments, and is not repeated here.
Optionally, obtaining the predicted texture coordinates of the vertex to be encoded in the target triangle includes:
the encoding end obtains, according to the geometric coordinates of each vertex of the target triangle, the texture coordinates of the projection point of the vertex to be encoded on the first edge;
the encoding end obtains the predicted texture coordinates of the vertex to be encoded according to the texture coordinates of the projection point.
In this embodiment, for any target triangle, the texture coordinates of the projection point of the vertex to be encoded on the first edge can be obtained from the geometric coordinates of each vertex of the target triangle, i.e. the geometric coordinates of the three vertices of the target triangle; for the specific implementation, see the subsequent embodiments.
After the texture coordinates of the projection point are obtained, the predicted texture coordinates of the vertex to be encoded are obtained from the texture coordinates of the projection point; for the specific implementation, see the subsequent embodiments.
Optionally, obtaining the texture coordinates of the projection point of the vertex to be encoded on the first edge according to the geometric coordinates of each vertex of the target triangle includes:
the encoding end obtains the texture coordinates of the projection point of the vertex to be encoded on the first edge according to the sum of vec(NX)uv and Nuv, or according to the difference between Nuv and vec(XN)uv;
where Nuv is the texture coordinate of the vertex N on the first edge of the target triangle, vec(NX)uv is the vector from the vertex N on the first edge of the target triangle to the texture coordinate of the predicted projection point X of the vertex to be encoded on the first edge, and vec(XN)uv is the vector from the predicted projection point X on the first edge to the texture coordinate of the vertex N on the first edge of the target triangle.
In this embodiment, the encoding end can obtain the texture coordinates of the projection point of the vertex to be encoded on the first edge according to the first formula. The first formula is Xuv = Nuv + vec(NX)uv, or Xuv = Nuv − vec(XN)uv, where vec(NX)uv = ((vec(NC)G · vec(NP)G) / (vec(NP)G · vec(NP)G)) · vec(NP)uv;
Xuv is the texture coordinate of the projection point of the vertex to be encoded on the first edge; Nuv is the texture coordinate of the vertex N on the first edge of the target triangle; vec(NX)uv is the vector from the vertex N on the first edge of the target triangle to the texture coordinate of the predicted projection point X of the vertex to be encoded on the first edge; vec(XN)uv is the vector from the predicted projection point X on the first edge to the texture coordinate of the vertex N on the first edge of the target triangle; vec(NP)G is the vector of geometric coordinates from the vertex N on the first edge to the vertex P; vec(NC)G is the vector from the vertex N on the first edge to the geometric coordinates CG of the vertex to be encoded; vec(NP)uv is the vector of texture coordinates from the vertex N on the first edge to the vertex P.
For ease of understanding, please refer to Figure 3. As shown in Figure 3, edge NP is an edge selected from the edge set and can be regarded as the above first edge; vertex N and vertex P are the two vertices of the first edge; vertex C is the vertex to be encoded; vertex N, vertex P and vertex C form the above target triangle; point X is the projection of vertex C onto edge NP; vertex O is an encoded point, and the triangle formed by vertex O, vertex N and vertex P shares edge NP with the triangle formed by vertex N, vertex P and vertex C. In this embodiment, the texture coordinates of the projection point of the point to be encoded on the first edge can be obtained based on the above first formula.
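A sketch of the first formula under the reconstruction above: the projection ratio is computed from the geometric coordinates of the target triangle and transferred to UV space. The explicit ratio form of vec(NX)uv is an assumption consistent with the vector definitions given in the text:

    import numpy as np

    def projection_point_uv(n_g, p_g, c_g, n_uv, p_uv):
        np_g = p_g - n_g                        # vec(NP)G
        nc_g = c_g - n_g                        # vec(NC)G
        ratio = np.dot(nc_g, np_g) / np.dot(np_g, np_g)
        return n_uv + ratio * (p_uv - n_uv)     # Xuv = Nuv + vec(NX)uv

    x_uv = projection_point_uv(np.array([0.0, 0.0, 0.0]), np.array([4.0, 0.0, 0.0]),
                               np.array([1.0, 2.0, 0.0]), np.array([0.0, 0.0]),
                               np.array([1.0, 0.0]))
    print(x_uv)  # [0.25 0.  ]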
Optionally, obtaining the predicted texture coordinates of the vertex to be encoded according to the texture coordinates of the projection point includes:
when the first vertex O corresponding to the first edge is an encoded vertex, or the first triangle is not a degenerate triangle, the encoding end obtains the texture coordinates of the vertex to be encoded according to Xuv and vec(XC)uv; the first triangle and the target triangle share the first edge, and the opposite vertex of the first edge in the first triangle is the first vertex O;
where Xuv is the texture coordinate of the predicted projection point X of the vertex to be encoded on the first edge, and vec(XC)uv is the vector from the predicted projection point X of the vertex to be encoded on the first edge to the texture coordinate of the vertex C to be encoded.
In this embodiment, the texture coordinates of the vertex to be encoded can be obtained according to the second formula, where the second formula is PredC_NP = Xuv + vec(XC)uv;
PredC_NP is the predicted texture coordinate of the vertex to be encoded; Xuv is the texture coordinate of the predicted projection point X of the vertex to be encoded on the first edge; vec(XC)uv is the vector from the predicted projection point X of the vertex to be encoded on the first edge to the texture coordinate Cuv of the vertex to be encoded; Ouv is the texture coordinate of the first vertex corresponding to the first edge of the target triangle.
Optionally, vec(XC)uv = (|vec(XC)G| / |vec(NP)G|) · Rotated(vec(NP)uv), where vec(XC)G is the vector from the predicted projection point X on the first edge to the geometric coordinates CG of the vertex to be encoded, vec(NP)G is the vector of geometric coordinates from the vertex N on the first edge to the vertex P, vec(NP)uv is the vector of texture coordinates from the vertex N on the first edge to the vertex P, and Rotated denotes rotating the vector by 90 degrees.
In this embodiment, referring to Figure 3, when the first vertex O is an encoded vertex, or the first triangle is not a degenerate triangle, the texture coordinate of the first vertex O can be used, based on the above second formula, to obtain the predicted texture coordinates of the vertex to be encoded.
It should be understood that the above first triangle is the triangle formed by vertex N, vertex P and vertex O in Figure 3; when vertex O lies on the first edge formed by vertex N and vertex P, the area of the first triangle is 0 and the first triangle is determined to be a degenerate triangle.
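A sketch of the second formula: the length of vec(XC)uv is scaled from geometry space via |vec(XC)G| / |vec(NP)G|, and of the two opposite 90-degree rotations the candidate lying on the opposite side of the first edge from Ouv is kept. Using Ouv for the side test is an assumption consistent with the role of the first vertex O described here:

    import numpy as np

    def rotated(v):
        return np.array([-v[1], v[0]])          # 90-degree rotation in UV space

    def predict_uv(n_g, p_g, c_g, n_uv, p_uv, o_uv):
        np_g, nc_g = p_g - n_g, c_g - n_g
        ratio = np.dot(nc_g, np_g) / np.dot(np_g, np_g)
        x_g = n_g + ratio * np_g                # projection point X in geometry space
        x_uv = n_uv + ratio * (p_uv - n_uv)     # Xuv from the first formula
        scale = np.linalg.norm(c_g - x_g) / np.linalg.norm(np_g)
        d = scale * rotated(p_uv - n_uv)        # candidate vec(XC)uv
        # keep the candidate on the opposite side of the edge from vertex O
        if np.dot(d, o_uv - x_uv) > 0:
            d = -d
        return x_uv + d                         # PredC_NP = Xuv + vec(XC)uv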
Optionally, obtaining the predicted texture coordinates of the vertex to be encoded according to the texture coordinates of the projection point includes:
when the first vertex O corresponding to the first edge is an unencoded vertex, or the first triangle is a degenerate triangle, the encoding end obtains the texture coordinates of the vertex to be encoded according to Xuv and vec(XC)uv, and encodes the target identifier corresponding to the vertex to be encoded; the first triangle and the target triangle share the first edge, and the opposite vertex of the first edge in the first triangle is the first vertex O;
where Xuv is the texture coordinate of the predicted projection point X of the vertex to be encoded on the first edge, and vec(XC)uv is the vector from the predicted projection point X of the vertex to be encoded on the first edge to the texture coordinate Cuv of the vertex to be encoded.
In this embodiment, the texture coordinates of the vertex to be encoded can be obtained according to the third formula, and the target identifier corresponding to the vertex to be encoded is encoded.
The third formula is PredC_NP = Xuv + vec(XC)uv;
PredC_NP is the predicted texture coordinate of the vertex to be encoded; Xuv is the texture coordinate of the predicted projection point X of the vertex to be encoded on the first edge; vec(XC)uv is the vector from the predicted projection point X of the vertex to be encoded on the first edge to the texture coordinate Cuv of the vertex to be encoded; Cuv is the texture coordinate of the vertex to be encoded; the target identifier is used to characterize the size relationship between |distance3| and |distance4|.
In this embodiment, referring to Figure 3, when the first vertex O is an unencoded vertex, or the first triangle is a degenerate triangle, the predicted texture coordinates of the vertex to be encoded can be obtained based on the above third formula.
It should be understood that the above target identifier is used to characterize the size relationship between |distance3| and |distance4|; for example, a target identifier equal to 0 may indicate one size relationship between |distance3| and |distance4|, and a target identifier equal to 1 the other.
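A sketch of the degenerate / unencoded-O case: both rotated candidates are formed, the one closer to the true texture coordinate Cuv is used, and the target identifier records the choice. The 0/1 assignment below is an assumed convention, since the text only states that the flag characterizes the size relationship between |distance3| and |distance4|:

    import numpy as np

    def choose_candidate(x_uv, d, c_uv):
        distance3 = np.linalg.norm(c_uv - (x_uv + d))   # distance to candidate 1
        distance4 = np.linalg.norm(c_uv - (x_uv - d))   # distance to candidate 2
        flag = 0 if distance3 <= distance4 else 1       # target identifier (assumed convention)
        pred = x_uv + d if flag == 0 else x_uv - d
        return pred, flag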
Optionally, the parts of the above first, second and third formulas that involve vertex N may be computed using vertex P instead.
For example, the texture coordinate of vertex P is used in place of the texture coordinate of vertex N.
For example, vec(NX)uv in the first formula is replaced by vec(PX)uv, where vec(PX)uv is the vector from the vertex P on the first edge of the target triangle to the texture coordinate of the predicted projection point X of the vertex to be encoded on the first edge.
The UV-coordinate encoding framework of the embodiments of the present application is shown in Figure 4; the overall encoding flow is:
When the geometric information and connection relationships of the three-dimensional mesh have both been encoded, the reconstructed geometric information and connection relationships can be used to encode the UV coordinates. First, a triangle is selected as the initial triangle and its coordinate values are encoded directly; next, a triangle adjacent to the initial triangle is selected as the triangle to be encoded, and the UV coordinates of its unencoded vertex are predicted by similar-triangle vertex prediction or multi-encoded-triangle vertex prediction; the difference between the true UV coordinates of the vertex to be encoded and the predicted coordinate value is encoded, and a new edge is selected in the newly encoded triangle to encode the unencoded vertex of the adjacent triangle; this process is iterated until the UV coordinates of the entire three-dimensional mesh have been encoded.
Please refer to Figure 5, which is a schematic flowchart of the decoding method provided by an embodiment of the present application. The decoding method provided by this embodiment includes the following steps:
S501: the decoding end decodes the obtained code stream corresponding to the target three-dimensional mesh to obtain the geometric information and connection relationships of the target three-dimensional mesh, and decodes the obtained code stream corresponding to each vertex to obtain the texture coordinate residual of each vertex.
S502: the decoding end performs a first shift operation on the texture coordinate residual of each vertex to obtain the target residual of each vertex.
It should be understood that the implementation of the first shift operation on the texture coordinate residual of each vertex in this step is the same as the implementation of the first shift operation in the above embodiments, and is not repeated here.
The bit length corresponding to the target residual is greater than the bit length corresponding to the texture coordinate residual.
S503: the decoding end determines the true texture coordinates of each vertex based on the geometric information and connection relationships of the target three-dimensional mesh and the target residual of each vertex.
In the embodiment of the present application, the obtained code stream corresponding to the target three-dimensional mesh is decoded to obtain the geometric information and connection relationships of the target three-dimensional mesh, and the obtained code stream corresponding to each vertex is decoded to obtain the texture coordinate residual of each vertex; the first shift operation is performed on the texture coordinate residual of each vertex to obtain the target residual of each vertex; and the true texture coordinates of each vertex are determined based on the geometric information and connection relationships of the target three-dimensional mesh and the target residual of each vertex. In the above solution, the bit length corresponding to the target residual is greater than the bit length corresponding to the texture coordinate residual, so more bits are used to store the target residual; in the subsequent UV-coordinate prediction based on the target residuals, the coordinate data is stored with high precision, preventing the prediction residual from increasing due to loss of data precision and thereby improving the decoding efficiency of the texture coordinates.
Optionally, performing the first shift operation on the texture coordinate residual of each vertex to obtain the target residual of each vertex includes:
for any vertex, the decoding end increases the number of bits occupied by the texture coordinate residual of the vertex to obtain third target bits corresponding to the texture coordinate residual of the vertex;
the decoding end uses the third target bits to store the texture coordinate residual of the vertex to obtain the target residual of the vertex.
In this embodiment, for any vertex, the above first shift operation is used to increase the number of bits occupied by the texture coordinate residual of the vertex.
Optionally, increasing the number of bits occupied by the texture coordinate residual of the vertex includes:
the decoding end performs a binary left shift on the texture coordinate residual of the vertex using the first shift parameter.
Another optional implementation is to add a third preset number of bits to the bits occupied by the texture coordinate residual of the vertex to obtain the third target bits.
The bits after the increase are called the third target bits; using the third target bits to store the texture coordinate residual of the vertex allows more residual data to be stored, yielding the target residual of the vertex.
Optionally, determining the true texture coordinates of each vertex based on the geometric information and connection relationships of the target three-dimensional mesh and the target residual of each vertex includes:
the decoding end determines N predicted texture coordinates of each vertex in the target three-dimensional mesh, where N is a positive integer greater than 1;
the decoding end performs the first shift operation on the N predicted texture coordinates of each vertex to obtain N fourth target texture coordinates of each vertex; the bit length corresponding to the fourth target texture coordinates is greater than the bit length corresponding to the predicted texture coordinates;
the decoding end determines the true texture coordinates of each vertex based on the N fourth target texture coordinates of each vertex and the target residual of each vertex.
In this embodiment, the decoding end can determine the N predicted texture coordinates of each vertex in the target three-dimensional mesh by predicting vertices with multiple decoded triangles.
Optionally, determining the N predicted texture coordinates of each vertex in the target three-dimensional mesh includes:
the decoding end selects a first edge from the edge set, and determines the triangle corresponding to the first edge, as well as triangles that take the vertex to be decoded as the opposite vertex and do not include the first edge, as target triangles; the vertices in a target triangle other than the vertex to be decoded are decoded vertices, and the opposite vertex of the first edge in the triangle corresponding to the first edge is the vertex to be decoded;
for each target triangle, the decoding end obtains the predicted texture coordinates of the vertex to be decoded in that target triangle.
Optionally, determining the true texture coordinates of each vertex based on the N fourth target texture coordinates of each vertex and the target residual of each vertex includes:
for any vertex, the decoding end performs the second shift operation on the N fourth target coordinates of the vertex to obtain the N predicted texture coordinates of the vertex, and performs the second shift operation on the target residual of the vertex to obtain the texture coordinate residual of the vertex;
the decoding end determines the target value corresponding to the N predicted texture coordinates of the vertex as the fifth target texture coordinate of the vertex;
the decoding end performs an addition operation on the fifth target texture coordinate of the vertex and the texture coordinate residual of the vertex to determine the true texture coordinates of the vertex.
In this embodiment, for any vertex, the second shift operation is performed on the N fourth target coordinates of the vertex to obtain its N predicted texture coordinates, and on the target residual of the vertex to obtain its texture coordinate residual; the bit length corresponding to the predicted texture coordinates is smaller than the bit length corresponding to the fourth target coordinates, and the bit length corresponding to the texture coordinate residual is smaller than the bit length corresponding to the target residual.
It should be understood that the implementation of the second shift operation in this embodiment is the same as the implementation of the second shift operation in the above embodiments, and is not repeated here.
After the N predicted texture coordinates are obtained, the target value corresponding to the N predicted texture coordinates is determined as the fifth target texture coordinate, and an addition operation is performed on the fifth target texture coordinate and the texture coordinate residual to determine the true texture coordinates.
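A minimal numeric walk-through of this reconstruction order follows; all values, the shift amount rightshift = 8 and the round-to-nearest offset are invented assumptions for illustration:

    RIGHT = 8
    offset = (1 << RIGHT) >> 1

    fourth_targets = [2560, 2688, 2816]   # N fourth target texture coordinates (shifted)
    target_residual = 512                 # target residual decoded from the stream (shifted)

    preds = [(abs(t) + offset) >> RIGHT for t in fourth_targets]   # second shift
    residual = (abs(target_residual) + offset) >> RIGHT            # second shift
    fifth_target = sum(preds) // len(preds)                        # target value of N predictions
    print(fifth_target + residual)        # 12: the reconstructed true texture coordinate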
Optionally, determining the true texture coordinates of each vertex based on the N fourth target texture coordinates of each vertex and the target residual of each vertex includes:
the decoding end determines the target value corresponding to the N fourth target texture coordinates of the vertex as the sixth target texture coordinate of the vertex;
the decoding end performs an addition operation on the sixth target texture coordinate of the vertex and the target residual of the vertex to determine the seventh target texture coordinate of the vertex;
the decoding end performs the second shift operation on the seventh target texture coordinate of the vertex to determine the true texture coordinates of the vertex.
In this embodiment, for any vertex, the target value corresponding to the N fourth target texture coordinates of the vertex is determined as the sixth target texture coordinate of the vertex. After the sixth target texture coordinate is determined, an addition operation is performed on the sixth target texture coordinate of the vertex and the target residual to determine the seventh target texture coordinate.
Further, the second shift operation is performed on the seventh target texture coordinate to determine the true texture coordinates of the vertex, where the bit length corresponding to the true texture coordinates is smaller than the bit length corresponding to the seventh target texture coordinate.
It should be understood that the implementation of the second shift operation in this embodiment is the same as the implementation of the second shift operation in the above embodiments, and is not repeated here.
Optionally, determining the true texture coordinates of each vertex based on the geometric information and connection relationships of the target three-dimensional mesh and the target residual of each vertex includes:
the decoding end decodes to obtain the true texture coordinates of each vertex in the target three-dimensional mesh.
Optionally, decoding to obtain the true texture coordinates of each vertex in the target three-dimensional mesh includes:
the decoding end selects a first edge from the edge set and obtains the predicted texture coordinates of the vertex to be decoded in the target triangle; the target triangle is the triangle corresponding to the first edge, and the opposite vertex of the target triangle is the vertex to be decoded;
the decoding end performs the first shift operation on the predicted texture coordinates of the vertex to be decoded to obtain an eighth target texture coordinate of the vertex to be decoded; the bit length corresponding to the eighth target texture coordinate is greater than the bit length corresponding to the predicted texture coordinates;
the decoding end determines the true texture coordinates of the vertex to be decoded based on the eighth target texture coordinate of the vertex to be decoded and the target residual of the vertex to be decoded.
Optionally, determining the true texture coordinates of the vertex to be decoded based on the eighth target texture coordinate of the vertex to be decoded and the target residual of the vertex to be decoded includes:
the decoding end performs the second shift operation on the eighth target texture coordinate of the vertex to be decoded to obtain the predicted texture coordinates of the vertex, and performs the second shift operation on the target residual of the vertex to be decoded to obtain the texture coordinate residual of the vertex;
the decoding end performs an addition operation on the predicted texture coordinates of the vertex to be decoded and the texture coordinate residual of the vertex to be decoded to determine the true texture coordinates of the vertex to be decoded.
In this embodiment, the second shift operation may be performed on the eighth target texture coordinate of the vertex to be decoded to obtain the predicted texture coordinates, and on the target residual of the vertex to be decoded to obtain the texture coordinate residual; further, an addition operation is performed on the predicted texture coordinates and the texture coordinate residual to determine the true texture coordinates of the vertex to be decoded.
It should be understood that the implementation of the second shift operation in this embodiment is the same as the implementation of the second shift operation in the above embodiments, and is not repeated here.
Optionally, determining the true texture coordinates of the vertex to be decoded based on the eighth target texture coordinate of the vertex to be decoded and the target residual of the vertex to be decoded includes:
the decoding end performs an addition operation on the eighth target texture coordinate of the vertex to be decoded and the target residual of the vertex to be decoded to determine a ninth target texture coordinate of the vertex to be decoded;
the decoding end performs the second shift operation on the ninth target texture coordinate of the vertex to be decoded to determine the true texture coordinates of the vertex to be decoded.
In this embodiment, an addition operation may be performed on the eighth target texture coordinate of the vertex to be decoded and the target residual to determine the ninth target texture coordinate of the vertex to be decoded; further, the second shift operation is performed on the ninth target texture coordinate to determine the true texture coordinates of the vertex to be decoded.
It should be understood that the implementation of the second shift operation in this embodiment is the same as the implementation of the second shift operation in the above embodiments, and is not repeated here.
Optionally, obtaining the predicted texture coordinates of the vertex to be decoded in the target triangle includes:
the decoding end obtains, according to the geometric coordinates of each vertex of the target triangle, the texture coordinates of the projection point of the vertex to be decoded on the first edge;
the decoding end obtains the predicted texture coordinates of the vertex to be decoded according to the texture coordinates of the projection point.
Optionally, obtaining the texture coordinates of the projection point of the vertex to be decoded on the first edge according to the geometric coordinates of each vertex of the target triangle includes:
the decoding end obtains the texture coordinates of the projection point of the vertex to be decoded on the first edge according to the sum of vec(NX)uv and Nuv, or according to the difference between Nuv and vec(XN)uv;
where Nuv is the texture coordinate of the vertex N on the first edge of the target triangle, vec(NX)uv is the vector from the vertex N on the first edge of the target triangle to the texture coordinate of the predicted projection point X of the vertex to be decoded on the first edge, and vec(XN)uv is the vector from the predicted projection point X on the first edge to the texture coordinate of the vertex N on the first edge of the target triangle.
In this embodiment, the decoding end can obtain the texture coordinates of the projection point of the vertex to be decoded on the first edge according to the first formula.
The first formula is Xuv = Nuv + vec(NX)uv, or Xuv = Nuv − vec(XN)uv, where vec(NX)uv = ((vec(NC)G · vec(NP)G) / (vec(NP)G · vec(NP)G)) · vec(NP)uv;
Xuv is the texture coordinate of the projection point of the vertex to be decoded on the first edge; Nuv is the texture coordinate of the vertex N on the first edge of the target triangle; vec(NX)uv is the vector from the vertex N on the first edge of the target triangle to the texture coordinate of the predicted projection point X of the vertex to be decoded on the first edge; vec(XN)uv is the vector from the predicted projection point X on the first edge to the texture coordinate of the vertex N on the first edge of the target triangle; vec(NP)G is the vector of geometric coordinates from the vertex N on the first edge to the vertex P; vec(NC)G is the vector from the vertex N on the first edge to the geometric coordinates CG of the vertex to be decoded; vec(NP)uv is the vector of texture coordinates from the vertex N on the first edge to the vertex P.
Optionally, obtaining the predicted texture coordinates of the vertex to be decoded according to the texture coordinates of the projection point includes:
when the first vertex O corresponding to the first edge is a decoded vertex, or the first triangle is not a degenerate triangle, the decoding end obtains the texture coordinates of the vertex to be decoded according to Xuv and vec(XC)uv; the first triangle and the target triangle share the first edge, and the opposite vertex of the first edge in the first triangle is the first vertex O;
where Xuv is the texture coordinate of the predicted projection point X of the vertex to be decoded on the first edge, and vec(XC)uv is the vector from the predicted projection point X of the vertex to be decoded on the first edge to the texture coordinate of the vertex C to be decoded.
Optionally, vec(XC)uv = (|vec(XC)G| / |vec(NP)G|) · Rotated(vec(NP)uv), where vec(XC)G is the vector from the predicted projection point X on the first edge to the geometric coordinates CG of the vertex to be decoded, vec(NP)G is the vector of geometric coordinates from the vertex N on the first edge to the vertex P, vec(NP)uv is the vector of texture coordinates from the vertex N on the first edge to the vertex P, and Rotated denotes rotating the vector by 90 degrees.
In this embodiment, the decoding end can obtain the texture coordinates of the vertex to be decoded according to the second formula.
The second formula is PredC_NP = Xuv + vec(XC)uv;
PredC_NP is the predicted texture coordinate of the vertex to be decoded; Xuv is the texture coordinate of the predicted projection point X of the vertex to be decoded on the first edge; vec(XC)uv is the vector from the predicted projection point X of the vertex to be decoded on the first edge to the texture coordinate Cuv of the vertex to be decoded; Ouv is the texture coordinate of the first vertex corresponding to the first edge of the target triangle.
Optionally, obtaining the predicted texture coordinates of the vertex to be decoded according to the texture coordinates of the projection point includes:
when the first vertex O corresponding to the first edge is an undecoded vertex, or the first triangle is a degenerate triangle, the decoding end determines the texture coordinates of the vertex to be decoded according to the read target identifier corresponding to the point to be decoded, Xuv and vec(XC)uv; the first triangle and the target triangle share the first edge, and the opposite vertex of the first edge in the first triangle is the first vertex O;
where Xuv is the texture coordinate of the predicted projection point X of the vertex to be decoded on the first edge, and vec(XC)uv is the vector from the predicted projection point X of the vertex to be decoded on the first edge to the texture coordinate Cuv of the vertex to be decoded.
In this embodiment, the decoding end can determine the texture coordinates of the vertex to be decoded based on the read target identifier corresponding to the point to be decoded and the third formula.
The third formula is PredC_NP = Xuv + vec(XC)uv;
PredC_NP is the predicted texture coordinate of the vertex to be decoded; Xuv is the texture coordinate of the predicted projection point X of the vertex to be decoded on the first edge; vec(XC)uv is the vector from the predicted projection point X of the vertex to be decoded on the first edge to the texture coordinate Cuv of the vertex to be decoded; Cuv is the texture coordinate of the vertex to be decoded; the target identifier is used to characterize the size relationship between |distance3| and |distance4|.
It should be noted that the decoding in the embodiments of the present application is the inverse process of encoding; the decoding block diagram is shown in Figure 6. That is, the UV-coordinate decoding process first decodes the geometric information and connection relationships, then decodes the code stream based on the geometric information and connection relationships to obtain the residuals, then obtains the predicted UV coordinates, and finally uses the residuals and the predicted UV coordinates to recover the true UV coordinates, thereby decoding the UV coordinates. For the UV-coordinate prediction method in the embodiments of the present application, see the description of the encoding end, which is not repeated here.
For the encoding method provided by the embodiments of the present application, the execution subject may be an encoding apparatus. In the embodiments of the present application, the encoding apparatus performing the encoding method is taken as an example to describe the encoding apparatus provided by the embodiments of the present application.
As shown in Figure 7, an embodiment of the present application also provides an encoding apparatus 700, including:
a reconstruction module 701, configured to reconstruct the geometric information and connection relationships of the target three-dimensional mesh according to the encoding results of the geometric information and connection relationships of the target three-dimensional mesh;
a shift module 702, configured to perform, according to the reconstructed geometric information and connection relationships, a first shift operation on the coordinates of each vertex in the target three-dimensional mesh to obtain the shift coordinates of each vertex; the bit length corresponding to the shift coordinates is greater than the bit length corresponding to the coordinates;
an encoding module 703, configured to encode the texture coordinates of each vertex in the target three-dimensional mesh based on the shift coordinates of each vertex.
Optionally, the shift module 702 is specifically configured to: for any vertex in the target three-dimensional mesh, increase the number of bits occupied by the coordinates of the vertex to obtain first target bits corresponding to the coordinates of the vertex; and use the first target bits to store the coordinates of the vertex to obtain the shift coordinates of the vertex.
Optionally, the shift module 702 is further specifically configured to: perform a binary left shift on the coordinates of the vertex using the first shift parameter.
Optionally, the encoding module 703 is specifically configured to: determine, based on the shift coordinates of each vertex, N predicted texture coordinates of each vertex in the target three-dimensional mesh, where N is a positive integer greater than 1; and encode the texture coordinate residual of each vertex, the texture coordinate residual of a vertex being determined based on the N predicted texture coordinates of the vertex.
Optionally, the encoding module 703 is further specifically configured to: select a first edge from the edge set, and determine the triangle corresponding to the first edge, as well as triangles that take the vertex to be encoded as the opposite vertex and do not include the first edge, as target triangles; the vertices in a target triangle other than the vertex to be encoded are encoded vertices, and the opposite vertex of the first edge in the triangle corresponding to the first edge is the vertex to be encoded; and, for each target triangle, obtain the predicted texture coordinates of the vertex to be encoded in that target triangle.
Optionally, the encoding module 703 is further specifically configured to: for any vertex, determine the target value corresponding to the N predicted texture coordinates of the vertex as the first target texture coordinate of the vertex; perform the second shift operation on the first target texture coordinate of the vertex to obtain a second target texture coordinate; and encode the texture coordinate residual of the vertex, the texture coordinate residual being determined based on the true texture coordinate of the vertex and the second target texture coordinate of the vertex.
Optionally, the encoding module 703 is further specifically configured to: reduce the number of bits occupied by the first target texture coordinate of the vertex to obtain second target bits corresponding to the first target texture coordinate of the vertex; and use the second target bits to store the first target texture coordinate of the vertex to obtain the second target texture coordinate of the vertex, where the bit length corresponding to the second target texture coordinate is smaller than the bit length corresponding to the first target texture coordinate.
Optionally, the encoding module 703 is further specifically configured to: perform a binary right shift on the first target texture coordinate of the vertex using the second shift parameter.
Optionally, the encoding module 703 is further specifically configured to: for any vertex, determine the target value corresponding to the N predicted texture coordinates of the vertex as the first target texture coordinate of the vertex; determine the texture coordinate residual of the vertex based on the true texture coordinate of the vertex and the first target texture coordinate of the vertex; and encode the texture coordinate residual on which the second shift operation has been performed.
Optionally, the encoding module 703 is further specifically configured to: encode the texture coordinates of each vertex in the target three-dimensional mesh based on the shift coordinates of each vertex.
Optionally, the encoding module 703 is further specifically configured to: select a first edge from the edge set and obtain the predicted texture coordinates of the vertex to be encoded in the target triangle, where the target triangle is the triangle corresponding to the first edge and the opposite vertex of the target triangle is the vertex to be encoded; and encode the texture coordinate residual between the true texture coordinates of the vertex to be encoded and the predicted texture coordinates of the vertex to be encoded.
Optionally, the encoding module 703 is further specifically configured to: perform the second shift operation on the predicted texture coordinates of the vertex to be encoded to obtain a third target texture coordinate of the vertex to be encoded; and encode the texture coordinate residual between the true texture coordinates of the vertex to be encoded and the third target texture coordinate of the vertex to be encoded.
Optionally, the encoding module 703 is further specifically configured to: determine the texture coordinate residual of the vertex to be encoded based on the true texture coordinates and the predicted texture coordinates of the vertex to be encoded; and encode the texture coordinate residual on which the second shift operation has been performed.
Optionally, the encoding module 703 is further specifically configured to: obtain, according to the geometric coordinates of each vertex of the target triangle, the texture coordinates of the projection point of the vertex to be encoded on the first edge; and obtain the predicted texture coordinates of the vertex to be encoded according to the texture coordinates of the projection point.
Optionally, the encoding module 703 is further specifically configured to: obtain the texture coordinates of the projection point of the vertex to be encoded on the first edge according to the sum of vec(NX)uv and Nuv, or according to the difference between Nuv and vec(XN)uv; where Nuv is the texture coordinate of the vertex N on the first edge of the target triangle, vec(NX)uv is the vector from the vertex N on the first edge of the target triangle to the texture coordinate of the predicted projection point X of the vertex to be encoded on the first edge, and vec(XN)uv is the vector from the predicted projection point X on the first edge to the texture coordinate of the vertex N on the first edge of the target triangle.
Optionally, the encoding module 703 is further specifically configured to: when the first vertex O corresponding to the first edge is an encoded vertex, or the first triangle is not a degenerate triangle, obtain the texture coordinates of the vertex to be encoded according to Xuv and vec(XC)uv, where the first triangle and the target triangle share the first edge and the opposite vertex of the first edge in the first triangle is the first vertex O; Xuv is the texture coordinate of the predicted projection point X of the vertex to be encoded on the first edge, and vec(XC)uv is the vector from the predicted projection point X to the texture coordinate of the vertex C to be encoded.
Optionally, the encoding module 703 is further specifically configured to: when the first vertex O corresponding to the first edge is an unencoded vertex, or the first triangle is a degenerate triangle, obtain the texture coordinates of the vertex to be encoded according to Xuv and vec(XC)uv, and encode the target identifier corresponding to the vertex to be encoded, where the first triangle and the target triangle share the first edge and the opposite vertex of the first edge in the first triangle is the first vertex O; Xuv is the texture coordinate of the predicted projection point X of the vertex to be encoded on the first edge, and vec(XC)uv is the vector from the predicted projection point X to the texture coordinate Cuv of the vertex to be encoded.
In the embodiments of the present application, the geometric information and connection relationships of the target three-dimensional mesh are reconstructed according to the encoding results of the geometric information and connection relationships of the target three-dimensional mesh; according to the reconstructed geometric information and connection relationships, a first shift operation is performed on the coordinates of each vertex in the target three-dimensional mesh to obtain the shift coordinates of each vertex; and, based on the shift coordinates of each vertex, the texture coordinates of each vertex in the target three-dimensional mesh are encoded. In the above solution, the bit length corresponding to the shift coordinates is greater than the bit length corresponding to the coordinates, so more bits are used to store the shift coordinates of the vertices; in the subsequent UV-coordinate prediction based on the shift coordinates, the coordinate data is stored with high precision, preventing the prediction residual from increasing due to loss of data precision and thereby improving the coding efficiency of the texture coordinates.
This apparatus embodiment corresponds to the encoding method embodiment shown in Figure 1; each implementation process and implementation manner on the encoding end in the above method embodiment is applicable to this apparatus embodiment and can achieve the same technical effects.
For the decoding method provided by the embodiments of the present application, the execution subject may be a decoding apparatus. In the embodiments of the present application, the decoding apparatus performing the decoding method is taken as an example to describe the decoding apparatus provided by the embodiments of the present application.
As shown in Figure 8, an embodiment of the present application also provides a decoding apparatus 800, including:
a decoding module 801, configured to decode the obtained code stream corresponding to the target three-dimensional mesh to obtain the geometric information and connection relationships of the target three-dimensional mesh, and to decode the obtained code stream corresponding to each vertex to obtain the texture coordinate residual of each vertex;
a shift module 802, configured to perform a first shift operation on the texture coordinate residual of each vertex to obtain the target residual of each vertex; the bit length corresponding to the target residual is greater than the bit length corresponding to the texture coordinate residual;
a determination module 803, configured to determine the true texture coordinates of each vertex based on the geometric information and connection relationships of the target three-dimensional mesh and the target residual of each vertex.
Optionally, the shift module 802 is specifically configured to: for any vertex, increase the number of bits occupied by the texture coordinate residual of the vertex to obtain third target bits corresponding to the texture coordinate residual of the vertex; and use the third target bits to store the texture coordinate residual of the vertex to obtain the target residual of the vertex.
Optionally, the shift module 802 is further specifically configured to: perform a binary left shift on the texture coordinate residual of the vertex using the first shift parameter.
Optionally, the determination module 803 is specifically configured to: determine N predicted texture coordinates of each vertex in the target three-dimensional mesh, where N is a positive integer greater than 1; perform the first shift operation on the N predicted texture coordinates of each vertex to obtain N fourth target texture coordinates of each vertex, where the bit length corresponding to the fourth target texture coordinates is greater than the bit length corresponding to the predicted texture coordinates; and determine the true texture coordinates of each vertex based on the N fourth target texture coordinates of each vertex and the target residual of each vertex.
Optionally, the determination module 803 is further specifically configured to: select a first edge from the edge set, and determine the triangle corresponding to the first edge, as well as triangles that take the vertex to be decoded as the opposite vertex and do not include the first edge, as target triangles; the vertices in a target triangle other than the vertex to be decoded are decoded vertices, and the opposite vertex of the first edge in the triangle corresponding to the first edge is the vertex to be decoded; and, for each target triangle, obtain the predicted texture coordinates of the vertex to be decoded in that target triangle.
Optionally, the determination module 803 is further specifically configured to: for any vertex, perform the second shift operation on the N fourth target coordinates of the vertex to obtain the N predicted texture coordinates of the vertex, and perform the second shift operation on the target residual of the vertex to obtain the texture coordinate residual of the vertex; determine the target value corresponding to the N predicted texture coordinates of the vertex as the fifth target texture coordinate of the vertex; and perform an addition operation on the fifth target texture coordinate of the vertex and the texture coordinate residual of the vertex to determine the true texture coordinates of the vertex.
Optionally, the determination module 803 is further specifically configured to: determine the target value corresponding to the N fourth target texture coordinates of the vertex as the sixth target texture coordinate of the vertex; perform an addition operation on the sixth target texture coordinate of the vertex and the target residual of the vertex to determine the seventh target texture coordinate of the vertex; and perform the second shift operation on the seventh target texture coordinate of the vertex to determine the true texture coordinates of the vertex.
Optionally, the determination module 803 is further specifically configured to: decode to obtain the true texture coordinates of each vertex in the target three-dimensional mesh.
Optionally, the determination module 803 is further specifically configured to: select a first edge from the edge set and obtain the predicted texture coordinates of the vertex to be decoded in the target triangle, where the target triangle is the triangle corresponding to the first edge and the opposite vertex of the target triangle is the vertex to be decoded; perform the first shift operation on the predicted texture coordinates of the vertex to be decoded to obtain an eighth target texture coordinate of the vertex to be decoded, where the bit length corresponding to the eighth target texture coordinate is greater than the bit length corresponding to the predicted texture coordinates; and determine the true texture coordinates of the vertex to be decoded based on the eighth target texture coordinate of the vertex to be decoded and the target residual of the vertex to be decoded.
Optionally, the determination module 803 is further specifically configured to: perform the second shift operation on the eighth target texture coordinate of the vertex to be decoded to obtain the predicted texture coordinates of the vertex, and perform the second shift operation on the target residual of the vertex to be decoded to obtain the texture coordinate residual of the vertex; and perform an addition operation on the predicted texture coordinates of the vertex to be decoded and the texture coordinate residual of the vertex to be decoded to determine the true texture coordinates of the vertex to be decoded.
Optionally, the determination module 803 is further specifically configured to: perform an addition operation on the eighth target texture coordinate of the vertex to be decoded and the target residual of the vertex to be decoded to determine a ninth target texture coordinate of the vertex to be decoded; and perform the second shift operation on the ninth target texture coordinate of the vertex to be decoded to determine the true texture coordinates of the vertex to be decoded.
Optionally, the determination module 803 is further specifically configured to: obtain, according to the geometric coordinates of each vertex of the target triangle, the texture coordinates of the projection point of the vertex to be decoded on the first edge; and obtain the predicted texture coordinates of the vertex to be decoded according to the texture coordinates of the projection point.
Optionally, the determination module 803 is further specifically configured to: obtain the texture coordinates of the projection point of the vertex to be decoded on the first edge according to the sum of vec(NX)uv and Nuv, or according to the difference between Nuv and vec(XN)uv; where Nuv is the texture coordinate of the vertex N on the first edge of the target triangle, vec(NX)uv is the vector from the vertex N on the first edge of the target triangle to the texture coordinate of the predicted projection point X of the vertex to be decoded on the first edge, and vec(XN)uv is the vector from the predicted projection point X on the first edge to the texture coordinate of the vertex N on the first edge of the target triangle.
Optionally, the determination module 803 is further specifically configured to: when the first vertex O corresponding to the first edge is a decoded vertex, or the first triangle is not a degenerate triangle, obtain the texture coordinates of the vertex to be decoded according to Xuv and vec(XC)uv, where the first triangle and the target triangle share the first edge and the opposite vertex of the first edge in the first triangle is the first vertex O; Xuv is the texture coordinate of the predicted projection point X of the vertex to be decoded on the first edge, and vec(XC)uv is the vector from the predicted projection point X to the texture coordinate of the vertex C to be decoded.
Optionally, the determination module 803 is further specifically configured to: when the first vertex O corresponding to the first edge is an undecoded vertex, or the first triangle is a degenerate triangle, determine the texture coordinates of the vertex to be decoded according to the read target identifier corresponding to the point to be decoded, Xuv and vec(XC)uv, where the first triangle and the target triangle share the first edge and the opposite vertex of the first edge in the first triangle is the first vertex O; Xuv is the texture coordinate of the predicted projection point X of the vertex to be decoded on the first edge, and vec(XC)uv is the vector from the predicted projection point X to the texture coordinate Cuv of the vertex to be decoded.
In the embodiments of the present application, the obtained code stream corresponding to the target three-dimensional mesh is decoded to obtain the geometric information and connection relationships of the target three-dimensional mesh, and the obtained code stream corresponding to each vertex is decoded to obtain the texture coordinate residual of each vertex; the first shift operation is performed on the texture coordinate residual of each vertex to obtain the target residual of each vertex; and the true texture coordinates of each vertex are determined based on the geometric information and connection relationships of the target three-dimensional mesh and the target residual of each vertex. In the above solution, the bit length corresponding to the target residual is greater than the bit length corresponding to the texture coordinate residual, so more bits are used to store the target residual; in the subsequent UV-coordinate prediction based on the target residuals, the coordinate data is stored with high precision, preventing the prediction residual from increasing due to loss of data precision and thereby improving the decoding efficiency of the texture coordinates.
The decoding apparatus provided by the embodiments of the present application can implement each process implemented by the method embodiment of Figure 5 and achieve the same technical effects; to avoid repetition, details are not repeated here.
The encoding apparatus and the decoding apparatus in the embodiments of the present application may be an electronic device, for example an electronic device with an operating system, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or a device other than a terminal. By way of example, the terminal may include but is not limited to the types of terminals listed above, and the other device may be a server, Network Attached Storage (NAS), etc., which are not specifically limited in the embodiments of the present application.
Optionally, as shown in Figure 9, an embodiment of the present application also provides a communication device 900, including a processor 901 and a memory 902. The memory 902 stores programs or instructions that can be run on the processor 901; for example, when the communication device 900 is a terminal, the programs or instructions, when executed by the processor 901, implement each step of the above encoding method embodiment or each step of the above decoding method embodiment, and can achieve the same technical effects.
An embodiment of the present application also provides a terminal, including a processor 901 and a communication interface; the processor 901 is configured to perform the following operations:
reconstructing the geometric information and connection relationships of the target three-dimensional mesh according to the encoding results of the geometric information and connection relationships of the target three-dimensional mesh;
performing, according to the reconstructed geometric information and connection relationships, a first shift operation on the coordinates of each vertex in the target three-dimensional mesh to obtain the shift coordinates of each vertex;
encoding the texture coordinates of each vertex in the target three-dimensional mesh based on the shift coordinates of each vertex.
Alternatively, the processor 901 is configured to perform the following operations:
decoding the obtained code stream corresponding to the target three-dimensional mesh to obtain the geometric information and connection relationships of the target three-dimensional mesh, and decoding the obtained code stream corresponding to each vertex to obtain the texture coordinate residual of each vertex;
performing a first shift operation on the texture coordinate residual of each vertex to obtain the target residual of each vertex;
determining the true texture coordinates of each vertex based on the geometric information and connection relationships of the target three-dimensional mesh and the target residual of each vertex.
This terminal embodiment corresponds to the above terminal-side method embodiments; each implementation process and implementation manner of the above method embodiments is applicable to this terminal embodiment and can achieve the same technical effects. Specifically, Figure 10 is a schematic diagram of the hardware structure of a terminal implementing an embodiment of the present application.
The terminal 1000 includes but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, a processor 1010 and other components.
Those skilled in the art will understand that the terminal 1000 may also include a power supply (such as a battery) for supplying power to the various components; the power supply may be logically connected to the processor 1010 through a power management system, so that functions such as charge management, discharge management and power consumption management are implemented through the power management system. The terminal structure shown in Figure 10 does not constitute a limitation on the terminal; the terminal may include more or fewer components than shown, or combine certain components, or arrange components differently, which is not repeated here.
It should be understood that, in this embodiment of the present application, the input unit 1004 may include a graphics processor (Graphics Processing Unit, GPU) 10041 and a microphone 10042; the graphics processor 10041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in the video capture mode or the image capture mode. The display unit 1006 may include a display panel 10061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071, also known as a touch screen, may include two parts: a touch detection device and a touch controller. The other input devices 10072 may include but are not limited to a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse and a joystick, which are not repeated here.
In this embodiment of the present application, after receiving downlink data from a network side device, the radio frequency unit 1001 may transmit it to the processor 1010 for processing; the radio frequency unit 1001 may also send uplink data to the network side device. Generally, the radio frequency unit 1001 includes but is not limited to an antenna, an amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, etc.
The memory 1009 may be used to store software programs or instructions as well as various data. The memory 1009 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, where the first storage area may store an operating system, and application programs or instructions required for at least one function (such as a sound playback function and an image playback function). In addition, the memory 1009 may include volatile memory or non-volatile memory, or the memory 1009 may include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM) or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), a static random access memory (Static RAM, SRAM), a dynamic random access memory (Dynamic RAM, DRAM), a synchronous dynamic random access memory (Synchronous DRAM, SDRAM), a double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDRSDRAM), an enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), a synch link dynamic random access memory (Synch link DRAM, SLDRAM) or a direct rambus random access memory (Direct Rambus RAM, DRRAM). The memory 1009 in the embodiments of the present application includes but is not limited to these and any other suitable types of memory.
The processor 1010 may include one or more processing units; optionally, the processor 1010 integrates an application processor and a modem processor, where the application processor mainly handles operations involving the operating system, user interface, application programs and the like, and the modem processor mainly processes wireless communication signals, such as a baseband processor. It can be understood that the modem processor may alternatively not be integrated into the processor 1010.
The processor 1010 is configured to perform the following operations:
reconstructing the geometric information and connection relationships of the target three-dimensional mesh according to the encoding results of the geometric information and connection relationships of the target three-dimensional mesh;
performing, according to the reconstructed geometric information and connection relationships, a first shift operation on the coordinates of each vertex in the target three-dimensional mesh to obtain the shift coordinates of each vertex;
encoding the texture coordinates of each vertex in the target three-dimensional mesh based on the shift coordinates of each vertex.
Alternatively, the processor 1010 is configured to perform the following operations:
decoding the obtained code stream corresponding to the target three-dimensional mesh to obtain the geometric information and connection relationships of the target three-dimensional mesh, and decoding the obtained code stream corresponding to each vertex to obtain the texture coordinate residual of each vertex;
performing a first shift operation on the texture coordinate residual of each vertex to obtain the target residual of each vertex;
determining the true texture coordinates of each vertex based on the geometric information and connection relationships of the target three-dimensional mesh and the target residual of each vertex.
An embodiment of the present application also provides a readable storage medium on which programs or instructions are stored; when the programs or instructions are executed by a processor, each process of the above encoding method embodiment or each process of the above decoding method embodiment is implemented, and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
The processor is the processor in the terminal described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory ROM, a random access memory RAM, a magnetic disk or an optical disc.
An embodiment of the present application further provides a chip, the chip including a processor and a communication interface, the communication interface being coupled to the processor, the processor being configured to run programs or instructions to implement each process of the above encoding method embodiment or each process of the above decoding method embodiment, and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
It should be understood that the chip mentioned in the embodiments of the present application may also be called a system-level chip, a system chip, a chip system or a system-on-chip, etc.
An embodiment of the present application further provides a computer program/program product, the computer program/program product being stored in a storage medium and executed by at least one processor to implement each process of the above encoding method embodiment or each process of the above decoding method embodiment, and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
An embodiment of the present application further provides a system, the system including an encoding end and a decoding end; the encoding end performs each process of the above encoding method embodiment, and the decoding end performs each process of the above decoding method embodiment, and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in conjunction with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of the present application.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, apparatuses and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the embodiments provided in the present application, it should be understood that the disclosed apparatuses and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and there may be other division methods in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk or an optical disc.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by a computer program controlling the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
It should be noted that, in this document, the terms "comprise", "include" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or apparatus including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or apparatus including the element. Furthermore, it should be pointed out that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, and may also include performing the functions in a substantially simultaneous manner or in a reverse order according to the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus the necessary general hardware platform; of course, they can also be implemented by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or the part contributing to the prior art, can be embodied in the form of a computer software product; the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc) and includes several instructions to cause a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above specific implementations, which are merely illustrative rather than restrictive. Under the inspiration of the present application, those of ordinary skill in the art can make many other forms without departing from the spirit of the present application and the scope protected by the claims, all of which fall within the protection of the present application.

Claims (36)

  1. An encoding method, comprising:
    reconstructing, by an encoder side, geometry information and connectivity of a target three-dimensional mesh according to an encoding result of the geometry information and connectivity of the target three-dimensional mesh;
    performing, by the encoder side according to the reconstructed geometry information and connectivity, a first shift operation on the coordinates of each vertex in the target three-dimensional mesh to obtain shifted coordinates of each vertex, wherein the bit length corresponding to the shifted coordinates is greater than the bit length corresponding to the coordinates; and
    encoding, by the encoder side, the texture coordinates of each vertex in the target three-dimensional mesh based on the shifted coordinates of each vertex.
  2. The method according to claim 1, wherein the performing a first shift operation on the coordinates of each vertex in the target three-dimensional mesh to obtain shifted coordinates of each vertex comprises:
    for any vertex in the target three-dimensional mesh, increasing, by the encoder side, the number of bits occupied by the coordinates of the vertex to obtain first target bits corresponding to the coordinates of the vertex; and
    storing, by the encoder side, the coordinates of the vertex in the first target bits to obtain the shifted coordinates of the vertex.
  3. The method according to claim 2, wherein the increasing the number of bits occupied by the coordinates of the vertex comprises:
    performing, by the encoder side, a binary left shift on the coordinates of the vertex using a first shift parameter.
  4. The method according to claim 1, wherein the encoding the texture coordinates of each vertex in the target three-dimensional mesh based on the shifted coordinates of each vertex comprises:
    determining, by the encoder side based on the shifted coordinates of each vertex, N predicted texture coordinates of each vertex in the target three-dimensional mesh, N being a positive integer greater than 1; and
    encoding, by the encoder side, a texture coordinate residual of each vertex, wherein the texture coordinate residual of a vertex is determined based on the N predicted texture coordinates of the vertex.
  5. The method according to claim 4, wherein the determining, based on the shifted coordinates of each vertex, N predicted texture coordinates of each vertex in the target three-dimensional mesh comprises:
    selecting, by the encoder side, a first edge from an edge set, and determining, as target triangles, the triangle corresponding to the first edge and the triangles that take the vertex to be encoded as an opposite vertex and do not include the first edge, wherein the vertices of the target triangles other than the vertex to be encoded are encoded vertices, and the opposite vertex of the first edge in the triangle corresponding to the first edge is the vertex to be encoded; and
    obtaining, by the encoder side for each target triangle, the predicted texture coordinate of the vertex to be encoded in that target triangle.
  6. The method according to claim 4, wherein the encoding a texture coordinate residual of each vertex comprises:
    for any vertex, determining, by the encoder side, the target value corresponding to the N predicted texture coordinates of the vertex as a first target texture coordinate of the vertex;
    performing, by the encoder side, a second shift operation on the first target texture coordinate of the vertex to obtain a second target texture coordinate; and
    encoding, by the encoder side, the texture coordinate residual of the vertex, the texture coordinate residual being determined based on the true texture coordinate of the vertex and the second target texture coordinate of the vertex.
  7. The method according to claim 6, wherein the performing a second shift operation on the first target texture coordinate of the vertex to obtain a second target texture coordinate comprises:
    reducing, by the encoder side, the number of bits occupied by the first target texture coordinate of the vertex to obtain second target bits corresponding to the first target texture coordinate of the vertex; and
    storing, by the encoder side, the first target texture coordinate of the vertex in the second target bits to obtain the second target texture coordinate of the vertex, wherein the bit length corresponding to the second target texture coordinate is less than the bit length corresponding to the first target texture coordinate.
  8. The method according to claim 7, wherein the reducing the number of bits occupied by the first target texture coordinate of the vertex comprises:
    performing, by the encoder side, a binary right shift on the first target texture coordinate of the vertex using a second shift parameter.
  9. The method according to claim 4, wherein the encoding a texture coordinate residual of each vertex comprises:
    for any vertex, determining, by the encoder side, the target value corresponding to the N predicted texture coordinates of the vertex as a first target texture coordinate of the vertex;
    determining, by the encoder side, the texture coordinate residual of the vertex based on the true texture coordinate of the vertex and the first target texture coordinate of the vertex; and
    encoding, by the encoder side, the texture coordinate residual on which a second shift operation has been performed.
  10. The method according to claim 1, wherein the encoding the texture coordinates of each vertex in the target three-dimensional mesh based on the shifted coordinates of each vertex comprises:
    encoding, by the encoder side, the texture coordinates of each vertex in the target three-dimensional mesh based on the shifted coordinates of each vertex.
  11. The method according to claim 10, wherein the encoding the texture coordinates of each vertex in the target three-dimensional mesh based on the shifted coordinates of each vertex comprises:
    selecting, by the encoder side, a first edge from an edge set, and obtaining the predicted texture coordinate of the vertex to be encoded in a target triangle, wherein the target triangle is the triangle corresponding to the first edge, and the opposite vertex of the target triangle is the vertex to be encoded; and
    encoding, by the encoder side, a texture coordinate residual between the true texture coordinate of the vertex to be encoded and the predicted texture coordinate of the vertex to be encoded.
  12. The method according to claim 11, wherein the encoding a texture coordinate residual between the true texture coordinate of the vertex to be encoded and the predicted texture coordinate of the vertex to be encoded comprises:
    performing, by the encoder side, a second shift operation on the predicted texture coordinate of the vertex to be encoded to obtain a third target texture coordinate of the vertex to be encoded; and
    encoding, by the encoder side, the texture coordinate residual between the true texture coordinate of the vertex to be encoded and the third target texture coordinate of the vertex to be encoded.
  13. The method according to claim 11, wherein the encoding a texture coordinate residual between the true texture coordinate of the vertex to be encoded and the predicted texture coordinate of the vertex to be encoded comprises:
    determining, by the encoder side, the texture coordinate residual of the vertex to be encoded based on the true texture coordinate of the vertex to be encoded and the predicted texture coordinate of the vertex to be encoded; and
    encoding, by the encoder side, the texture coordinate residual on which a second shift operation has been performed.
  14. The method according to claim 5 or 11, wherein the obtaining the predicted texture coordinate of the vertex to be encoded in the target triangle comprises:
    obtaining, by the encoder side, the texture coordinate of the projection point of the vertex to be encoded on the first edge according to the geometric coordinates of the vertices of the target triangle; and
    obtaining, by the encoder side, the predicted texture coordinate of the vertex to be encoded according to the texture coordinate of the projection point.
  15. The method according to claim 14, wherein the obtaining the texture coordinate of the projection point of the vertex to be encoded on the first edge according to the geometric coordinates of the vertices of the target triangle comprises:
    obtaining, by the encoder side, the texture coordinate of the projection point of the vertex to be encoded on the first edge according to the sum of $\overrightarrow{NX}_{uv}$ and $N_{uv}$, or according to the difference between $N_{uv}$ and $\overrightarrow{XN}_{uv}$;
    wherein $N_{uv}$ is the texture coordinate of vertex N on the first edge of the target triangle, $\overrightarrow{NX}_{uv}$ is the vector from the texture coordinate of vertex N on the first edge of the target triangle to the texture coordinate of the predicted projection point X of the vertex to be encoded on the first edge, and $\overrightarrow{XN}_{uv}$ is the vector from the texture coordinate of the predicted projection point X on the first edge to the texture coordinate of vertex N on the first edge of the target triangle.
  16. The method according to claim 14, wherein the obtaining the predicted texture coordinate of the vertex to be encoded according to the texture coordinate of the projection point comprises:
    in a case that the first vertex O corresponding to the first edge is an encoded vertex, or that a first triangle is not a degenerate triangle, obtaining, by the encoder side, the texture coordinate of the vertex to be encoded according to $X_{uv}$ and $\overrightarrow{XC}_{uv}$, wherein the first triangle and the target triangle share the first edge, and the opposite vertex of the first edge of the first triangle is the first vertex O;
    wherein $X_{uv}$ is the texture coordinate of the predicted projection point X of the vertex to be encoded on the first edge, and $\overrightarrow{XC}_{uv}$ is the vector from the texture coordinate of the predicted projection point X of the vertex to be encoded on the first edge to the texture coordinate of the vertex to be encoded C.
  17. The method according to claim 14, wherein the obtaining the predicted texture coordinate of the vertex to be encoded according to the texture coordinate of the projection point comprises:
    in a case that the first vertex O corresponding to the first edge is an unencoded vertex, or that a first triangle is a degenerate triangle, obtaining, by the encoder side, the texture coordinate of the vertex to be encoded according to $X_{uv}$ and $\overrightarrow{XC}_{uv}$, and encoding a target identifier corresponding to the vertex to be encoded, wherein the first triangle and the target triangle share the first edge, and the opposite vertex of the first edge of the first triangle is the first vertex O;
    wherein $X_{uv}$ is the texture coordinate of the predicted projection point X of the vertex to be encoded on the first edge, and $\overrightarrow{XC}_{uv}$ is the vector from the texture coordinate of the predicted projection point X of the vertex to be encoded on the first edge to the texture coordinate $C_{uv}$ of the vertex to be encoded.
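For orientation, the projection-based prediction recited in claims 14 to 17 can be sketched as follows. This is a minimal Python illustration under assumed conventions, not the reference implementation of this application: it projects the vertex to be encoded onto the first edge in geometry space, transfers the projection ratio into UV space to obtain $X_{uv}$, and scales the geometric height into UV space assuming the atlas locally preserves the ratio of the perpendicular offset to the edge length. The two returned candidates correspond to the sign ambiguity of $\overrightarrow{XC}_{uv}$ that claims 16 and 17 resolve via the first vertex O or a coded target identifier. The function name and the use of NumPy are illustrative.

```python
import numpy as np

def predict_uv(n_xyz, p_xyz, c_xyz, n_uv, p_uv):
    """Sketch of the projection predictor: N and P are the endpoints of the
    first edge with known UVs; C is the vertex to be predicted.
    Assumes edge NP is not degenerate (claims 16/17 treat that case)."""
    e = p_xyz - n_xyz                               # first edge NP in geometry space
    t = np.dot(c_xyz - n_xyz, e) / np.dot(e, e)     # projection ratio of C onto NP
    x_xyz = n_xyz + t * e                           # predicted projection point X
    x_uv = n_uv + t * (p_uv - n_uv)                 # X_uv = N_uv + vec(NX)_uv
    # map the geometric height |XC| into UV space, perpendicular to the edge
    h = np.linalg.norm(c_xyz - x_xyz) / np.linalg.norm(e)
    e_uv = p_uv - n_uv
    xc_uv = h * np.array([-e_uv[1], e_uv[0]])       # vec(XC)_uv, up to sign
    return x_uv + xc_uv, x_uv - xc_uv               # the two C_uv candidates
```

Running this predictor on the left-shifted integer coordinates of claims 1 to 3 gives the projection ratio t more fixed-point headroom, which appears to be the purpose of widening the bit length before prediction.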
  18. A decoding method, comprising:
    decoding, by a decoder side, an obtained bitstream corresponding to a target three-dimensional mesh to obtain geometry information and connectivity of the target three-dimensional mesh, and decoding an obtained bitstream corresponding to each vertex to obtain a texture coordinate residual of each vertex;
    performing, by the decoder side, a first shift operation on the texture coordinate residual of each vertex to obtain a target residual of each vertex, wherein the bit length corresponding to the target residual is greater than the bit length corresponding to the texture coordinate residual; and
    determining, by the decoder side, the true texture coordinate of each vertex based on the geometry information and connectivity of the target three-dimensional mesh and the target residual of each vertex.
  19. The method according to claim 18, wherein the performing a first shift operation on the texture coordinate residual of each vertex to obtain a target residual of each vertex comprises:
    for any vertex, increasing, by the decoder side, the number of bits occupied by the texture coordinate residual of the vertex to obtain third target bits corresponding to the texture coordinate residual of the vertex; and
    storing, by the decoder side, the texture coordinate residual of the vertex in the third target bits to obtain the target residual of the vertex.
  20. The method according to claim 19, wherein the increasing the number of bits occupied by the texture coordinate residual of the vertex comprises:
    performing, by the decoder side, a binary left shift on the texture coordinate residual of the vertex using a first shift parameter.
  21. The method according to claim 18, wherein the determining the true texture coordinate of each vertex based on the geometry information and connectivity of the target three-dimensional mesh and the target residual of each vertex comprises:
    determining, by the decoder side, N predicted texture coordinates of each vertex in the target three-dimensional mesh, N being a positive integer greater than 1;
    performing, by the decoder side, a first shift operation on the N predicted texture coordinates of each vertex to obtain N fourth target texture coordinates of each vertex, wherein the bit length corresponding to the fourth target texture coordinates is greater than the bit length corresponding to the predicted texture coordinates; and
    determining, by the decoder side, the true texture coordinate of each vertex based on the N fourth target texture coordinates of each vertex and the target residual of each vertex.
  22. The method according to claim 21, wherein the determining N predicted texture coordinates of each vertex in the target three-dimensional mesh comprises:
    selecting, by the decoder side, a first edge from an edge set, and determining, as target triangles, the triangle corresponding to the first edge and the triangles that take the vertex to be decoded as an opposite vertex and do not include the first edge, wherein the vertices of the target triangles other than the vertex to be decoded are decoded vertices, and the opposite vertex of the first edge in the triangle corresponding to the first edge is the vertex to be decoded; and
    obtaining, by the decoder side for each target triangle, the predicted texture coordinate of the vertex to be decoded in that target triangle.
  23. The method according to claim 21, wherein the determining the true texture coordinate of each vertex based on the N fourth target texture coordinates of each vertex and the target residual of each vertex comprises:
    for any vertex, performing, by the decoder side, a second shift operation on the N fourth target texture coordinates of the vertex to obtain the N predicted texture coordinates of the vertex, and performing a second shift operation on the target residual of the vertex to obtain the texture coordinate residual of the vertex;
    determining, by the decoder side, the target value corresponding to the N predicted texture coordinates of the vertex as a fifth target texture coordinate of the vertex; and
    adding, by the decoder side, the fifth target texture coordinate of the vertex and the texture coordinate residual of the vertex to determine the true texture coordinate of the vertex.
  24. The method according to claim 21, wherein the determining the true texture coordinate of each vertex based on the N fourth target texture coordinates of each vertex and the target residual of each vertex comprises:
    determining, by the decoder side, the target value corresponding to the N fourth target texture coordinates of the vertex as a sixth target texture coordinate of the vertex;
    adding, by the decoder side, the sixth target texture coordinate of the vertex and the target residual of the vertex to determine a seventh target texture coordinate of the vertex; and
    performing, by the decoder side, a second shift operation on the seventh target texture coordinate of the vertex to determine the true texture coordinate of the vertex.
  25. The method according to claim 18, wherein the determining the true texture coordinate of each vertex based on the geometry information and connectivity of the target three-dimensional mesh and the target residual of each vertex comprises:
    decoding, by the decoder side, to obtain the true texture coordinate of each vertex in the target three-dimensional mesh.
  26. The method according to claim 25, wherein the decoding to obtain the true texture coordinate of each vertex in the target three-dimensional mesh comprises:
    selecting, by the decoder side, a first edge from an edge set, and obtaining the predicted texture coordinate of the vertex to be decoded in a target triangle, wherein the target triangle is the triangle corresponding to the first edge, and the opposite vertex of the target triangle is the vertex to be decoded;
    performing, by the decoder side, a first shift operation on the predicted texture coordinate of the vertex to be decoded to obtain an eighth target texture coordinate of the vertex to be decoded, wherein the bit length corresponding to the eighth target texture coordinate is greater than the bit length corresponding to the predicted texture coordinate; and
    determining, by the decoder side, the true texture coordinate of the vertex to be decoded based on the eighth target texture coordinate of the vertex to be decoded and the target residual of the vertex to be decoded.
  27. The method according to claim 26, wherein the determining the true texture coordinate of the vertex to be decoded based on the eighth target texture coordinate of the vertex to be decoded and the target residual of the vertex to be decoded comprises:
    performing, by the decoder side, a second shift operation on the eighth target texture coordinate of the vertex to be decoded to obtain the predicted texture coordinate of the vertex, and performing a second shift operation on the target residual of the vertex to be decoded to obtain the texture coordinate residual of the vertex; and
    adding, by the decoder side, the predicted texture coordinate of the vertex to be decoded and the texture coordinate residual of the vertex to be decoded to determine the true texture coordinate of the vertex to be decoded.
  28. The method according to claim 26, wherein the determining the true texture coordinate of the vertex to be decoded based on the eighth target texture coordinate of the vertex to be decoded and the target residual of the vertex to be decoded comprises:
    adding, by the decoder side, the eighth target texture coordinate of the vertex to be decoded and the target residual of the vertex to be decoded to determine a ninth target texture coordinate of the vertex to be decoded; and
    performing, by the decoder side, a second shift operation on the ninth target texture coordinate of the vertex to be decoded to determine the true texture coordinate of the vertex to be decoded.
  29. The method according to claim 22 or 26, wherein the obtaining the predicted texture coordinate of the vertex to be decoded in the target triangle comprises:
    obtaining, by the decoder side, the texture coordinate of the projection point of the vertex to be decoded on the first edge according to the geometric coordinates of the vertices of the target triangle; and
    obtaining, by the decoder side, the predicted texture coordinate of the vertex to be decoded according to the texture coordinate of the projection point.
  30. The method according to claim 29, wherein the obtaining the texture coordinate of the projection point of the vertex to be decoded on the first edge according to the geometric coordinates of the vertices of the target triangle comprises:
    obtaining, by the decoder side, the texture coordinate of the projection point of the vertex to be decoded on the first edge according to the sum of $\overrightarrow{NX}_{uv}$ and $N_{uv}$, or according to the difference between $N_{uv}$ and $\overrightarrow{XN}_{uv}$;
    wherein $N_{uv}$ is the texture coordinate of vertex N on the first edge of the target triangle, $\overrightarrow{NX}_{uv}$ is the vector from the texture coordinate of vertex N on the first edge of the target triangle to the texture coordinate of the predicted projection point X of the vertex to be decoded on the first edge, and $\overrightarrow{XN}_{uv}$ is the vector from the texture coordinate of the predicted projection point X on the first edge to the texture coordinate of vertex N on the first edge of the target triangle.
  31. The method according to claim 29, wherein the obtaining the predicted texture coordinate of the vertex to be decoded according to the texture coordinate of the projection point comprises:
    in a case that the first vertex O corresponding to the first edge is a decoded vertex, or that a first triangle is not a degenerate triangle, obtaining, by the decoder side, the texture coordinate of the vertex to be decoded according to $X_{uv}$ and $\overrightarrow{XC}_{uv}$, wherein the first triangle and the target triangle share the first edge, and the opposite vertex of the first edge of the first triangle is the first vertex O;
    wherein $X_{uv}$ is the texture coordinate of the predicted projection point X of the vertex to be decoded on the first edge, and $\overrightarrow{XC}_{uv}$ is the vector from the texture coordinate of the predicted projection point X of the vertex to be decoded on the first edge to the texture coordinate of the vertex to be decoded C.
  32. The method according to claim 29, wherein the obtaining the predicted texture coordinate of the vertex to be decoded according to the texture coordinate of the projection point comprises:
    in a case that the first vertex O corresponding to the first edge is an undecoded vertex, or that a first triangle is a degenerate triangle, determining, by the decoder side, the texture coordinate of the vertex to be decoded according to the read target identifier corresponding to the vertex to be decoded, $X_{uv}$, and $\overrightarrow{XC}_{uv}$, wherein the first triangle and the target triangle share the first edge, and the opposite vertex of the first edge of the first triangle is the first vertex O;
    wherein $X_{uv}$ is the texture coordinate of the predicted projection point X of the vertex to be decoded on the first edge, and $\overrightarrow{XC}_{uv}$ is the vector from the texture coordinate of the predicted projection point X of the vertex to be decoded on the first edge to the texture coordinate $C_{uv}$ of the vertex to be decoded.
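The bit-shift bookkeeping of claims 6 to 9 and 18 to 24 can likewise be sketched in a few lines of Python. In the fragment below, the shift parameter values and the choice of the per-vertex "target value" (a component-wise mean) are assumptions; the claims fix neither. Encoder-side predictions are taken to be already in the widened domain (the geometry was left-shifted per claims 1 to 3), while the decoder widens its predictions explicitly per claim 21 and follows the claim-24 combination path.

```python
SHIFT1 = 8   # first shift parameter (assumed value)
SHIFT2 = 8   # second shift parameter (assumed value)

def encode_residual(true_uv, wide_pred_uvs):
    """Claims 6-8 sketch: combine the N widened predictions into a first
    target texture coordinate (mean assumed), right-shift it back to
    coordinate precision (second shift), and code the difference."""
    n = len(wide_pred_uvs)
    first = [sum(c) // n for c in zip(*wide_pred_uvs)]   # first target texture coordinate
    second = [c >> SHIFT2 for c in first]                # second shift operation
    return [t - p for t, p in zip(true_uv, second)]      # texture coordinate residual

def decode_uv(residual, pred_uvs):
    """Claims 19-21 and 24 sketch: left-shift the decoded residual and the
    N predictions (first shift), combine and add in the widened domain,
    then right-shift once (second shift) to recover the true coordinate."""
    target_res = [c << SHIFT1 for c in residual]              # claims 19-20
    fourth = [[c << SHIFT1 for c in p] for p in pred_uvs]     # fourth target coordinates
    n = len(fourth)
    sixth = [sum(c) // n for c in zip(*fourth)]               # sixth target coordinate
    seventh = [s + r for s, r in zip(sixth, target_res)]      # seventh target coordinate
    return [c >> SHIFT2 for c in seventh]                     # second shift operation
```

With SHIFT1 equal to SHIFT2, the decoder's widened arithmetic mirrors the encoder's, so the residual coded at coordinate precision reconstructs the true texture coordinate exactly up to the shared rounding of the target value.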
  33. An encoding apparatus, comprising:
    a reconstruction module, configured to reconstruct geometry information and connectivity of a target three-dimensional mesh according to an encoding result of the geometry information and connectivity of the target three-dimensional mesh;
    a shift module, configured to perform, according to the reconstructed geometry information and connectivity, a first shift operation on the coordinates of each vertex in the target three-dimensional mesh to obtain shifted coordinates of each vertex, wherein the bit length corresponding to the shifted coordinates is greater than the bit length corresponding to the coordinates; and
    an encoding module, configured to encode the texture coordinates of each vertex in the target three-dimensional mesh based on the shifted coordinates of each vertex.
  34. A decoding apparatus, comprising:
    a decoding module, configured to decode an obtained bitstream corresponding to a target three-dimensional mesh to obtain geometry information and connectivity of the target three-dimensional mesh, and to decode an obtained bitstream corresponding to each vertex to obtain a texture coordinate residual of each vertex;
    a shift module, configured to perform a first shift operation on the texture coordinate residual of each vertex to obtain a target residual of each vertex, wherein the bit length corresponding to the target residual is greater than the bit length corresponding to the texture coordinate residual; and
    a determination module, configured to determine the true texture coordinate of each vertex based on the geometry information and connectivity of the target three-dimensional mesh and the target residual of each vertex.
  35. A terminal, comprising a processor and a memory, the memory storing a program or instructions executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the encoding method according to any one of claims 1 to 17, or implement the steps of the decoding method according to any one of claims 18 to 32.
  36. A readable storage medium, storing a program or instructions, wherein the program or instructions, when executed by a processor, implement the steps of the encoding method according to any one of claims 1 to 17, or implement the steps of the decoding method according to any one of claims 18 to 32.
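A toy round trip through the two sketches above, under the same assumptions (arbitrary values, one vertex, N = 2 predictions; encoder predictions supplied in the widened domain, decoder predictions at coordinate precision):

```python
true_uv = [513, 257]
wide_preds = [[512 << SHIFT1, 256 << SHIFT1],
              [516 << SHIFT1, 260 << SHIFT1]]      # encoder-side, already widened
res = encode_residual(true_uv, wide_preds)         # -> [-1, -1]
narrow_preds = [[512, 256], [516, 260]]            # decoder-side predictions
assert decode_uv(res, narrow_preds) == true_uv     # exact reconstruction
```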
PCT/CN2023/104351 2022-07-21 2023-06-30 Encoding and decoding method, apparatus, and device WO2024017008A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210865721.3A CN117478901A (zh) 2022-07-21 2022-07-21 Encoding and decoding method, apparatus, and device
CN202210865721.3 2022-07-21

Publications (1)

Publication Number Publication Date
WO2024017008A1 true WO2024017008A1 (zh) 2024-01-25

Family

ID=89616986

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/104351 WO2024017008A1 (zh) 2022-07-21 2023-06-30 Encoding and decoding method, apparatus, and device

Country Status (2)

Country Link
CN (1) CN117478901A (zh)
WO (1) WO2024017008A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030011611A1 (en) * 2001-07-13 2003-01-16 Sony Computer Entertainment Inc. Rendering process
US20090080516A1 (en) * 2005-01-14 2009-03-26 Eun Young Chang Method of encoding and decoding texture coordinates in three-dimensional mesh information for effective texture mapping
US20180253867A1 (en) * 2017-03-06 2018-09-06 Canon Kabushiki Kaisha Encoding and decoding of texture mapping data in textured 3d mesh models
US20210090301A1 (en) * 2019-09-24 2021-03-25 Apple Inc. Three-Dimensional Mesh Compression Using a Video Encoder
CN114402621A (zh) * 2019-09-30 2022-04-26 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Transform method, inverse transform method, encoder, decoder, and storage medium

Also Published As

Publication number Publication date
CN117478901A (zh) 2024-01-30

Similar Documents

Publication Publication Date Title
JP5981566B2 (ja) Method and apparatus for processing a bitstream representing a 3D model
CN108810571A (zh) Method and device for encoding and decoding a two-dimensional point cloud
US10397612B2 (en) Three-dimensional video encoding method, three-dimensional video decoding method, and related apparatus
CN103546158A (zh) Compressed depth cache
JP2015504545A (ja) Predictive position encoding
WO2014166434A1 (zh) Method and apparatus for encoding and decoding a depth image
KR100927601B1 (ko) Method and apparatus for encoding/decoding three-dimensional mesh information
CN105279730A (zh) Compression techniques for dynamically generated graphics resources
JP2008527787A (ja) Method of encoding and decoding texture coordinates of three-dimensional mesh information for effective texture mapping
JP2014027658A (ja) Compression encoding and decoding method and apparatus
WO2022257971A1 (zh) Point cloud encoding processing method, point cloud decoding processing method, and related device
WO2022121650A1 (zh) Point cloud attribute prediction method, encoder, decoder, and storage medium
RU2668708C1 (ru) Improved file compression and encryption
WO2024017008A1 (zh) Encoding and decoding method, apparatus, and device
US10553035B2 (en) Valence based implicit traversal for improved compression of triangular meshes
WO2024007951A1 (zh) Encoding and decoding method, apparatus, and device
WO2024083043A1 (zh) Mesh encoding method and apparatus, communication device, and readable storage medium
WO2023155779A1 (zh) Encoding method, decoding method, apparatus, and communication device
WO2024001953A1 (zh) Lossless encoding method, lossless decoding method, apparatus, and device
WO2024083039A1 (zh) Mesh encoding method, mesh decoding method, and related device
WO2023246686A1 (zh) Lossless encoding method, lossless decoding method, apparatus, and device
WO2023098802A1 (zh) Point cloud attribute encoding method, point cloud attribute decoding method, and terminal
WO2023193707A1 (zh) Encoding and decoding method, apparatus, and device
WO2024120325A1 (zh) Point cloud encoding method, point cloud decoding method, and terminal
WO2023098803A1 (zh) Point cloud encoding processing method, point cloud decoding processing method, and related device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23842073

Country of ref document: EP

Kind code of ref document: A1