CN118302794A - Mesh geometry coding - Google Patents

Mesh geometry coding

Info

Publication number
CN118302794A
Authority
CN
China
Prior art keywords
bits
normal
data
depth
scaling factor
Prior art date
Legal status
Pending
Application number
CN202380013402.2A
Other languages
Chinese (zh)
Inventor
D. Graziosi
A. Zaghetto
A. Tabatabai
Current Assignee
Sony Group Corp
Sony Optical Archive Inc
Original Assignee
Sony Group Corp
Optical Archive Inc
Priority date
Filing date
Publication date
Application filed by Sony Group Corp and Optical Archive Inc
Publication of CN118302794A


Abstract

Depth image generation is improved through more efficient encoding using a video codec. Depth is mapped to the luma channel without using all of the available bits, and the remaining bits are used to generate a depth scaling factor that is incorporated into the bilinear interpolation algorithm used during rasterization. A normal filtering process is described in which vertex positions are adjusted according to normals estimated from surface pixels. After decoding the depth image, the pixels belonging to a triangle's surface are collected and used to estimate a plane and the normal of that plane. That normal is compared with the normal of the plane defined by the triangle's three vertices. If they do not match, the vertex positions are adjusted to match the normal estimated from the pixel surface. The adjustment may follow an iterative minimization procedure.

Description

Mesh geometry coding
Cross Reference to Related Applications
The present application claims priority under 35 U.S.C. §119(e) from U.S. provisional patent application Ser. No. 63/269,915, entitled "MESH GEOMETRY CODING," filed March 25, 2022, which is incorporated herein by reference in its entirety for all purposes.
Technical Field
The present invention relates to three-dimensional graphics. More particularly, the present invention relates to the encoding of three-dimensional graphics.
Background
Recently, a novel method of compressing volumetric content, such as point clouds, based on 3D-to-2D projections is being standardized. This method, known as V3C (visual volumetric video-based coding), maps 3D volumetric data into several 2D patches, which are then arranged into an atlas image that is subsequently encoded with a video encoder. The atlas images correspond to the geometry of the points, the respective texture, and an occupancy map that indicates which positions are to be considered for the point cloud reconstruction.
In 2017, MPEG issued a call for proposals (CfP) for the compression of point clouds. After evaluating several proposals, MPEG is currently considering two different technologies for point cloud compression: 3D native coding techniques (based on octrees and similar coding methods), and 3D-to-2D projection followed by conventional video coding. In the case of dynamic 3D scenes, MPEG is using test model software (TMC2) based on patch surface modeling, projection of patches from 3D to 2D images, and encoding of the 2D images with a video encoder such as HEVC. This approach has proven to be more efficient than native 3D coding and is able to achieve competitive bit rates at acceptable quality.
Due to the success of projection-based methods (also known as video-based methods, or V-PCC) for encoding 3D point clouds, the standard is expected to include more 3D data, such as 3D meshes, in future versions. However, the current version of the standard is only suitable for transmitting unconnected sets of points, so there is no mechanism to send the connectivity of the points, which is required in 3D mesh compression.
Methods have been proposed to extend the functionality of V-PCC to meshes as well. One possible way is to encode the vertices using V-PCC and then encode the connectivity using a mesh compression approach such as TFAN or Edgebreaker. A limitation of this approach is that the original mesh must be dense so that the point cloud generated from the vertices is not sparse and can be efficiently encoded after projection. Moreover, the order of the vertices affects the coding of connectivity, and different methods of reorganizing the mesh connectivity have been proposed. An alternative way to encode a sparse mesh is to encode the vertex positions in 3D using RAW patch data. Since RAW patches encode (x, y, z) directly, in this approach all vertices are encoded as RAW data, while connectivity is encoded by a similar mesh compression method as mentioned before. Note that in a RAW patch, vertices may be sent in any preferred order, so the order generated from connectivity encoding can be used. This method can encode sparse point clouds, but RAW patches are not efficient for encoding 3D data, and further data, such as the attributes of the triangle faces, may be missing from this approach.
Disclosure of Invention
Depth image generation is improved through more efficient encoding using a video codec. Depth is mapped to the luma channel without using all of the available bits, and the remaining bits are used to generate a depth scaling factor that is incorporated into the bilinear interpolation algorithm used during rasterization. A normal filtering process is described in which vertex positions are adjusted according to normals estimated from surface pixels. After decoding the depth image, the pixels belonging to a triangle's surface are collected and used to estimate a plane and the normal of that plane. That normal is compared with the normal of the plane defined by the triangle's three vertices. If they do not match, the vertex positions are adjusted to match the normal estimated from the pixel surface. The adjustment may follow an iterative minimization procedure.
In one aspect, a method of mesh geometry encoding includes: mapping depth information to a luma channel using less than all available bits, generating a depth scaling factor that is incorporated into a bilinear interpolation algorithm used during rasterization, and performing normal filtering including adjusting positions of vertices according to normals estimated from surface pixels. Mapping the depth information to the luma channel includes using M bits of N available bits, where M is less than N. Generating the depth scaling factor utilizes the 2 remaining bits of the N available bits. The 2 remaining bits enable the data to be multiplied by 4, such that the last two bits of the data are 0, and enable most significant bit alignment to be used. The method includes applying the depth scaling factor to data, where the data includes floating point values. Normal filtering includes performing a plane fit using points within a triangle. Normal filtering includes using a one-, two-, or three-ring neighborhood to minimize the total normal angle.
In another aspect, an apparatus includes: a non-transitory memory for storing an application, the application for: mapping depth information to a luma channel using less than all available bits, generating a depth scaling factor that is incorporated into a bilinear interpolation algorithm used during rasterization, and performing normal filtering including adjusting positions of vertices according to normals estimated from surface pixels; and a processor coupled to the memory, the processor configured to process the application. Mapping the depth information to the luma channel includes using M bits of N available bits, where M is less than N. Generating the depth scaling factor utilizes the 2 remaining bits of the N available bits. The 2 remaining bits enable the data to be multiplied by 4, such that the last two bits of the data are 0, and enable most significant bit alignment to be used. The application is configured to apply the depth scaling factor to data, where the data includes floating point values. Normal filtering includes performing a plane fit using points within a triangle. Normal filtering includes using a one-, two-, or three-ring neighborhood to minimize the total normal angle.
In another aspect, a system includes: one or more cameras for acquiring three-dimensional content, and an encoder for encoding the three-dimensional content by: mapping depth information to a luma channel using less than all available bits, generating a depth scaling factor that is incorporated into a bilinear interpolation algorithm used during rasterization, and performing normal filtering including adjusting positions of vertices according to normals estimated from surface pixels. Mapping the depth information to the luma channel includes using M bits of N available bits, where M is less than N. Generating the depth scaling factor utilizes the 2 remaining bits of the N available bits. The 2 remaining bits enable the data to be multiplied by 4, such that the last two bits of the data are 0, and enable most significant bit alignment to be used. The encoder is configured to apply the depth scaling factor to data, where the data includes floating point values. Normal filtering includes performing a plane fit using points within a triangle. Normal filtering includes using a one-, two-, or three-ring neighborhood to minimize the total normal angle.
Drawings
Fig. 1 illustrates a simplified diagram of encoding depth information using MSB alignment, according to some embodiments.
FIG. 2 illustrates a simplified diagram of normal filtering according to some embodiments.
FIG. 3 illustrates a diagram of one-, two-, and three-ring neighborhoods for minimizing the total normal angle according to some embodiments.
Fig. 4 illustrates a flow chart of a mesh geometry encoding method in accordance with some embodiments.
Fig. 5 illustrates a block diagram of an exemplary computing device configured to implement the mesh geometry encoding method according to some embodiments.
Detailed Description
Depth image generation can be improved through more efficient encoding using a video codec. Depth is mapped to the luma channel without using all of the available bits (e.g., using only 6 bits instead of 8 bits), and the remaining bits are used to generate a depth scaling factor that is incorporated into the bilinear interpolation algorithm used during rasterization. In this way, higher precision values can be used when rasterizing the surface of a triangle, due to the depth scaling factor. A normal filtering process is also described in which vertex positions are adjusted according to normals estimated from surface pixels. After decoding the depth image, the pixels belonging to a triangle's surface are collected and used to estimate a plane and, consequently, the normal of that plane. The normal is then compared with the normal of the plane defined by the triangle's three vertices. If they do not match within a certain threshold, the vertex positions are adjusted to match the normal estimated from the pixel surface. The adjustment may follow an iterative minimization process that finds the total minimum deviation of the normal angles within a one-, two-, or three-ring neighborhood of a given face.
Fig. 1 illustrates a simplified diagram of encoding depth information using MSB alignment, according to some embodiments. A triangle is projected. Once the triangle is projected, the depth (the distance between the triangle and the projection surface) generates the image (e.g., the gray area) shown in image 100. For example, a patch from an image is projected onto a surface. The bit depth (e.g., 8 bits versus 6 bits) determines the range of values available: with 8 bits, values 0-255 are available, but with 6 bits, only values 0-63 are available.
One advantage of using N bits (e.g., 8 bits) is that more triangles can be put together in one patch. When using M bits (e.g., 6 bits), patches will be partitioned, since not all triangles will be able to be combined together. However, when M bits (e.g., 6 bits) are used and the information is put into video using N bits (e.g., 8 bits) for the luma channel, there are N-M unused bits (e.g., 2 bits). The N-M bits (e.g., 2 bits) can be used for video scaling (e.g., all values multiplied by 4). For example, 32×4=128, 31×4=124, 30×4=120. Relatively, these values are the same, but the last two bits are always zero. By setting the MSB value equal to "true" (most significant bit alignment), the video encoder then performs the video scaling. At the decoder side, the values can be divided by 4 to return to the original values. By performing video scaling, however, banding from quantization appears (e.g., the jump from 124 to 128 is a difference large enough to produce banding).
Instead of implementing video scaling, patch scaling can be performed. When an image is rasterized, it can be rasterized into floating point values. Then, when multiplied by 4, the values are 32×4=128, 31.5×4=126 and 30.75×4=123, so the values are closer to each other, which makes transitions smoother and reduces banding.
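The difference between the two scaling variants can be sketched as follows (a minimal illustration with hypothetical helper names; the patent does not specify an implementation). Video scaling multiplies already-quantized integer depths by 4, so codes land only on multiples of 4; patch scaling multiplies the floating-point depths before rounding, so intermediate codes survive and banding is reduced.

```python
import numpy as np

def video_scale(depths):
    # Quantize to integers first, then scale by 4: the last two bits are
    # always 0, so neighboring codes jump in steps of 4 (banding).
    return np.round(depths).astype(np.int64) * 4

def patch_scale(depths):
    # Scale the floating-point depths first, then round: intermediate
    # codes such as 126 and 123 survive, giving smoother transitions.
    return np.round(np.asarray(depths) * 4).astype(np.int64)

depths = [32.0, 31.5, 30.75]
print(video_scale(depths))  # [128 128 124]
print(patch_scale(depths))  # [128 126 123]
# The decoder divides by 4 to return to the original 6-bit range.
```

Either way, the scaled values occupy the full 8-bit luma range, which is what allows MSB alignment to be signaled.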
FIG. 2 illustrates a simplified diagram of normal filtering according to some embodiments. When a depth image is decoded, points (e.g., in diagram 200) are sampled. However, there may be some distortion after video compression (e.g., the colors are slightly different, as shown in diagram 202). Since the points sample the surface of a triangle, the points should lie more or less on that surface, as shown in diagram 206. Patch scaling with floating point multiplication may result in small quantization errors, such that a point lies slightly above or below the surface. In addition, there are video errors (e.g., 128 becomes 129). These errors are indicated in diagram 202, with some values above or below the correct values.
For diagram 200, the normal is determined by taking the cross product of the triangle's edge vectors (e.g., the vector from the top vertex to the lower-left vertex, and the vector from the top vertex to the lower-right vertex). For diagram 202, the normal is determined in the same way but is affected by the distortion of the points, since they have moved slightly. Therefore, the normal of the triangle in diagram 202 is slightly different from (e.g., points in a different direction than) the normal of the triangle in diagram 200.
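The per-triangle normal described here is the standard cross product of two edge vectors; a brief sketch (generic geometry code, not taken from the patent):

```python
import numpy as np

def triangle_normal(v0, v1, v2):
    # Cross product of the two edges leaving v0, normalized to unit length.
    n = np.cross(np.subtract(v1, v0), np.subtract(v2, v0))
    return n / np.linalg.norm(n)

# A triangle lying in the xy-plane has normal (0, 0, 1).
print(triangle_normal([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))
# [0. 0. 1.]
```

Because only three points are used, any displacement of a single vertex tilts this normal, which is exactly the sensitivity that motivates the plane fit below.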
Another way to calculate the normal is to perform a plane fit, as shown in diagram 204. Instead of using only the triangle's vertices, all points (the vertices and the points within the triangle) are used to find the plane that passes through the points and minimizes the error between the points and the plane. The normal of that plane can then be calculated. The resulting normal is typically closer to the original normal, since more points are used.
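One common way to implement such a plane fit — assumed here, since the patent does not mandate a specific algorithm — is a least-squares fit via SVD of the centered points: the right singular vector with the smallest singular value is the plane normal.

```python
import numpy as np

def fit_plane_normal(points):
    # points: (N, 3) array of surface samples (triangle vertices plus
    # interior pixels). Center the cloud, then take the direction of
    # least variance as the plane normal.
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]  # unit vector, up to sign

# Noisy samples of the plane z = 0: the fitted normal is close to (0, 0, 1).
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0.0, 1.0, 50),
                       rng.uniform(0.0, 1.0, 50),
                       rng.normal(0.0, 1e-3, 50)])
print(np.abs(fit_plane_normal(pts)))  # approximately [0, 0, 1]
```

Averaging over all sampled pixels makes the estimate far less sensitive to the per-pixel quantization and video errors described above than the three-vertex cross product.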
If the normal calculated by plane fitting is closer to the original normal, then the coordinates of the vertices can be adjusted to be closer to the fitted normal. In some embodiments, the normal calculated by plane fitting is compared with the original normal, and if the calculated normal is within a threshold amount of the original normal, the plane-fitted normal is used. To correct the position of a vertex, the coordinate position perpendicular to the projection plane (along its normal) is adjusted (the tangent and bitangent coordinates are coded losslessly). By considering all of the triangles connected to a vertex, vertex adjustment via normal filtering can be performed simultaneously. The multi-dimensional problem becomes an optimization problem that can be solved with linear equations.
FIG. 3 illustrates a diagram of one-, two-, and three-ring neighborhoods for minimizing the total normal angle according to some embodiments. Each set of triangles surrounding a specific triangle is considered a ring. For example, for triangle 300, the first ring 302 is the twelve triangles surrounding triangle 300. The second ring 304 is the 24 triangles surrounding the first ring 302. The third ring 306 is the 36 triangles surrounding the second ring 304. The normals of each ring can be used in the normal analysis to determine a better, more reliable normal value. The normal values determined using one, two, or three rings can be used to adjust the normal value of a triangle (e.g., triangle 300) and/or the positions of the triangle's vertices so that they generate normals that fit the improved normal values.
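The ring structure can be enumerated with a breadth-first expansion over shared-vertex adjacency — a sketch under the assumption that "surrounding" means sharing at least one vertex with the previous ring; the patent does not specify the traversal:

```python
from collections import defaultdict

def k_ring_faces(faces, seed, k):
    # faces: list of vertex-index triples; seed: index of the center face.
    # Returns [ring 1, ring 2, ..., ring k], each a set of face indices
    # sharing a vertex with the previous ring, excluding visited faces.
    vert_to_faces = defaultdict(set)
    for fi, f in enumerate(faces):
        for v in f:
            vert_to_faces[v].add(fi)
    visited = {seed}
    frontier = {seed}
    rings = []
    for _ in range(k):
        nxt = set()
        for fi in frontier:
            for v in faces[fi]:
                nxt |= vert_to_faces[v]
        nxt -= visited
        rings.append(nxt)
        visited |= nxt
        frontier = nxt
    return rings

# Tiny fan of four triangles around vertex 0: the three other faces all
# share vertex 0 with face 0, so they form its one-ring.
faces = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 1)]
print(k_ring_faces(faces, 0, 1))  # [{1, 2, 3}]
```

On a regular triangulation like the one in FIG. 3, the same expansion yields the 12-, 24-, and 36-triangle rings described above.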
Fig. 4 illustrates a flow chart of a mesh geometry encoding method in accordance with some embodiments. In step 400, depth information is mapped to a luma channel using less than all available bits. Mapping the depth information to the luma channel includes using M bits of N available bits, where M is less than N (e.g., 6 of 8, 8 of 10, or 14 of 16). The remaining N-M bits (e.g., 8-6=2) are used for precision improvement (the scaling factor). In step 402, a depth scaling factor is generated that is incorporated into a bilinear interpolation algorithm used during rasterization. The depth scaling factor is generated using the 2 remaining bits of the N available bits (e.g., 8 bits). The 2 remaining bits enable the data to be multiplied by 4, so that the last two bits of the data are 0, and so that most significant bit alignment can be used. In some embodiments, the depth scaling factor is applied to floating point values. In step 404, normal filtering is performed, including adjusting the positions of vertices based on normals estimated from surface pixels. Normal filtering includes performing a plane fit using points within a triangle. Normal filtering includes using a one-, two-, or three-ring neighborhood to minimize the total normal angle. In some embodiments, fewer or additional steps are implemented. In some embodiments, the order of the steps is modified.
Fig. 5 illustrates a block diagram of an exemplary computing device configured to implement the mesh geometry encoding method according to some embodiments. The computing device 500 can be used to acquire, store, compute, process, communicate, and/or display information, such as images and video, including 3D content. The computing device 500 is capable of implementing any of the encoding/decoding aspects. In general, a hardware structure suitable for implementing the computing device 500 includes a network interface 502, memory 504, a processor 506, I/O device(s) 508, a bus 510, and a storage device 512. The choice of processor is not critical as long as a suitable processor with sufficient speed is selected. The memory 504 can be any conventional computer memory known in the art. The storage device 512 can include a hard drive, CDROM, CDRW, DVD, DVDRW, a high definition disc/drive, an ultra-high definition drive, a flash memory card, or any other storage device. The computing device 500 can include one or more network interfaces 502. Examples of network interfaces include network cards connected to an Ethernet or other type of LAN. The I/O device(s) 508 can include one or more of the following: keyboard, mouse, monitor, screen, printer, modem, touchscreen, button interface, and other devices. Mesh geometry encoding application(s) 530 used to implement the mesh geometry encoding implementation are likely to be stored in the storage device 512 and memory 504 and processed as applications are typically processed. More or fewer components than shown in Fig. 5 can be included in the computing device 500. In some embodiments, mesh geometry encoding hardware 520 is included. Although the computing device 500 in Fig. 5 includes applications 530 and hardware 520 for the mesh geometry encoding implementation, the mesh geometry encoding method can be implemented on a computing device in hardware, firmware, software, or any combination thereof.
For example, in some embodiments, the mesh geometry encoding application 530 is programmed in memory and executed using a processor. In another example, in some embodiments, the mesh geometry encoding hardware 520 is programmed hardware logic including gates specifically designed to implement the mesh geometry encoding method.
In some embodiments, the mesh geometry encoding application(s) 530 include several applications and/or modules. In some embodiments, modules include one or more sub-modules as well. In some embodiments, fewer or additional modules can be included.
Examples of suitable computing devices include personal computers, laptop computers, computer workstations, servers, mainframe computers, handheld computers, personal digital assistants, cellular/mobile phones, smart appliances, gaming consoles, digital cameras, digital camcorders, camera phones, smart phones, portable music players, tablet computers, mobile devices, video players, video disc writers/players (e.g., DVD writers/players, high definition disc writers/players, ultra-high definition disc writers/players), televisions, home entertainment systems, augmented reality devices, virtual reality devices, smart jewelry (e.g., smart watches), vehicles (e.g., autonomous vehicles), or any other suitable computing device.
To utilize the mesh geometry encoding method, a device acquires or receives 3D content (e.g., point cloud content). The mesh geometry encoding method can be implemented with user assistance or automatically without user involvement.
In operation, the mesh geometry encoding method enables more efficient and more accurate 3D content encoding compared to previous implementations. By using the depth scaling factor at the encoder, the video images are smoother and easier to encode. At the decoder side, once the video is reconstructed, inconsistencies between the normal values obtained from the surface pixels and the normal values obtained from the surface defined by only the three vertices can be verified. Normal filtering can readjust the positions of the vertices to align the normals and improve the mesh reconstruction. The methods described herein make the images more codec-friendly.
Some embodiments of mesh geometry encoding
1. A method of mesh geometry encoding, comprising:
Mapping depth information to a luma channel using less than all available bits;
Generating a depth scaling factor that is incorporated into a bilinear interpolation algorithm used during rasterization; and
Normal filtering is performed, including adjusting the position of the vertex based on the estimated normal from the surface pixel.
2. The method of clause 1, wherein mapping the depth information to the luma channel comprises using M bits of the N available bits, wherein M is less than N.
3. The method of clause 2, wherein generating the depth scaling factor utilizes 2 remaining bits of the N available bits.
4. The method of clause 3, wherein 2 remaining bits enable the data to be multiplied by 4, the last two bits of the data to be 0, and the most significant bit alignment to be used.
5. The method of clause 1, further comprising applying the depth scaling factor to data, wherein the data comprises floating point values.
6. The method of clause 1, wherein the normal filtering comprises performing a plane fit using points within the triangle.
7. The method of clause 1, wherein the normal filtering comprises using a one-, two-, or three-ring neighborhood to minimize the total normal angle.
8. An apparatus, comprising:
a non-transitory memory for storing an application program for:
Mapping depth information to a luma channel using less than all available bits;
Generating a depth scaling factor that is incorporated into a bilinear interpolation algorithm used during rasterization; and
Performing normal filtering, including adjusting the position of the vertex based on the estimated normal from the surface pixel; and
A processor coupled to the memory, the processor configured to process the application.
9. The apparatus of clause 8, wherein mapping the depth information to the luma channel comprises using M bits of the N available bits, wherein M is less than N.
10. The apparatus of clause 9, wherein generating the depth scaling factor utilizes 2 remaining bits of the N available bits.
11. The apparatus of clause 10, wherein the 2 remaining bits enable the data to be multiplied by 4, the last two bits of the data to be 0, and the most significant bit alignment to be used.
12. The apparatus of clause 8, wherein the application is configured to apply the depth scaling factor to data, wherein the data comprises a floating point value.
13. The apparatus of clause 8, wherein the normal filtering comprises performing a plane fit using points within the triangle.
14. The apparatus of clause 8, wherein the normal filtering comprises using a one-, two-, or three-ring neighborhood to minimize the total normal angle.
15. A system, comprising:
One or more cameras for acquiring three-dimensional content;
an encoder for encoding three-dimensional content:
Mapping depth information to a luma channel using less than all available bits;
Generating a depth scaling factor that is incorporated into a bilinear interpolation algorithm used during rasterization; and
Normal filtering is performed, including adjusting the position of the vertex based on the estimated normal from the surface pixel.
16. The system of clause 15, wherein mapping the depth information to the luma channel comprises using M bits of the N available bits, wherein M is less than N.
17. The system of clause 16, wherein generating the depth scaling factor utilizes 2 remaining bits of the N available bits.
18. The system of clause 17, wherein the 2 remaining bits enable the data to be multiplied by 4, the last two bits of the data to be 0, and the most significant bit alignment to be used.
19. The system of clause 15, wherein the encoder is configured to apply the depth scaling factor to data, wherein the data comprises floating point values.
20. The system of clause 15, wherein the normal filtering includes performing a plane fit using points within the triangle.
21. The system of clause 15, wherein the normal filtering comprises using a one-, two-, or three-ring neighborhood to minimize the total normal angle.
The invention has been described in terms of specific embodiments incorporating details to facilitate the understanding of the principles of construction and operation of the invention. Such references herein to specific embodiments and details thereof are not intended to limit the scope of the claims appended hereto. It will be apparent to those skilled in the art that other various modifications may be made in the embodiments chosen for illustration without departing from the spirit and scope of the invention as defined in the claims.

Claims (21)

1. A method of mesh geometry encoding, comprising:
Mapping depth information to a luma channel using less than all available bits;
Generating a depth scaling factor that is incorporated into a bilinear interpolation algorithm used during rasterization; and
Normal filtering is performed, including adjusting the position of the vertex based on the estimated normal from the surface pixel.
2. The method of claim 1, wherein mapping the depth information to the luma channel comprises using M bits of N available bits, wherein M is less than N.
3. The method of claim 2, wherein generating the depth scaling factor utilizes 2 remaining bits of the N available bits.
4. A method as claimed in claim 3, wherein 2 remaining bits enable multiplication of data by 4, enabling the last two bits of the data to be 0, and enabling the use of most significant bit alignment.
5. The method of claim 1, further comprising applying the depth scaling factor to data, wherein the data comprises floating point values.
6. The method of claim 1, wherein normal filtering comprises performing a plane fit using points within a triangle.
7. The method of claim 1, wherein normal filtering comprises using a one-, two-, or three-ring neighborhood to minimize an overall normal angle.
8. An apparatus, comprising:
a non-transitory memory for storing an application program for:
Mapping depth information to a luma channel using less than all available bits;
Generating a depth scaling factor that is incorporated into a bilinear interpolation algorithm used during rasterization; and
Performing normal filtering, including adjusting the position of the vertex based on the estimated normal from the surface pixel; and
A processor coupled to the memory, the processor configured to process the application.
9. The apparatus of claim 8, wherein mapping the depth information to the luma channel comprises using M bits of N available bits, wherein M is less than N.
10. The apparatus of claim 9, wherein generating the depth scaling factor utilizes 2 remaining bits of the N available bits.
11. The apparatus of claim 10, wherein 2 remaining bits enable data to be multiplied by 4, enable last two bits of the data to be 0, and enable most significant bit alignment to be used.
12. The apparatus of claim 8, wherein the application is configured to apply the depth scaling factor to data, wherein the data comprises a floating point value.
13. The apparatus of claim 8, wherein normal filtering comprises performing a plane fit using points within a triangle.
14. The apparatus of claim 8, wherein normal filtering comprises using a one-, two-, or three-ring neighborhood to minimize an overall normal angle.
15. A system, comprising:
One or more cameras for acquiring three-dimensional content;
An encoder for encoding the three-dimensional content:
Mapping depth information to a luma channel using less than all available bits;
Generating a depth scaling factor that is incorporated into a bilinear interpolation algorithm used during rasterization; and
Normal filtering is performed, including adjusting the position of the vertex based on the estimated normal from the surface pixel.
16. The system of claim 15, wherein mapping the depth information to the luma channel comprises using M bits of N available bits, wherein M is less than N.
17. The system of claim 16, wherein generating the depth scaling factor utilizes 2 remaining bits of the N available bits.
18. The system of claim 17, wherein 2 remaining bits enable data to be multiplied by 4, the last two bits of the data to be 0, and enable most significant bit alignment to be used.
19. The system of claim 15, wherein the encoder is configured to apply the depth scaling factor to data, wherein the data comprises floating point values.
20. The system of claim 15, wherein normal filtering comprises performing a plane fit using points within a triangle.
21. The system of claim 15, wherein normal filtering comprises using a one-, two-, or three-ring neighborhood to minimize an overall normal angle.
CN202380013402.2A 2022-03-25 2023-03-06 Mesh geometry coding Pending CN118302794A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US63/269,915 2022-03-25
US17/987,828 2022-11-15

Publications (1)

Publication Number Publication Date
CN118302794A 2024-07-05



Legal Events

Date Code Title Description
PB01 Publication